Ai Text To Free

v1.0.0

Turn a 200-word blog post or article into a free, shareable 1080p video just by typing what you need. Whether it's converting written text into videos for...

Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill is presented as a cloud-based text→video service and all required behavior (session creation, uploads, render requests, credit checks) maps to that purpose. Requiring a NEMO_TOKEN as the primary credential is proportional. Minor inconsistency: the SKILL.md frontmatter lists a configPaths entry (~/.config/nemovideo/) while the registry metadata claimed no required config paths.
Instruction Scope
Instructions stay within the domain of driving the remote Nemovideo API (auth, session, SSE, upload, render, export). They also instruct generating a UUID for anonymous auth and including attribution headers derived from local install paths (detecting ~/.clawhub/, ~/.cursor/skills/, etc.). Detecting install path and including it in headers implies reading local filesystem paths for telemetry — this is related to attribution but is out-of-band relative to pure text→video functionality.
Install Mechanism
No install spec and no code files — instruction-only skill. Nothing is written to disk or fetched at install time.
Credentials
Only one credential (NEMO_TOKEN) is declared, which matches the described API usage. The skill will obtain an anonymous token from the service if no token is present (by POSTing to the service's anonymous-token endpoint). The frontmatter's configPaths declaration (not reflected in the registry) suggests optional local config access; this should be clarified.
Persistence & Privilege
always is false and the skill does not request persistent system-level privileges or modifications to other skills. Autonomous invocation is allowed (platform default) but not combined with other broad privileges here.
Assessment
This skill appears to do what it claims: it drives a remote Nemovideo API to convert text into videos and only needs a NEMO_TOKEN. Before installing, consider:

  • Only provide a dedicated, limited-scope NEMO_TOKEN (or use the anonymous flow); do not supply production or broad-scope tokens for other services.
  • The skill may read your home/install paths to populate attribution headers, which can reveal basic environment info; if you want to avoid that, ask the skill author to omit or document that behavior.
  • Confirm you trust the external domain (https://mega-api-prod.nemovideo.ai) before sending private content, since uploads and rendered output go there.
  • The SKILL.md frontmatter and registry metadata disagree about required config paths; ask the author to clarify.

Overall the footprint is coherent, but validate token scope and the external service's privacy/retention policies before use.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

📝 Clawdis
Env: NEMO_TOKEN (primary)
Latest version hash: vk978w39vhwbw0fymd3ec8efbgd84n03r
63 downloads · 0 stars · 1 version · updated 1w ago
v1.0.0 · MIT-0

Getting Started

Got a text script to work with? Send it over and tell me what you need — I'll take care of the AI video creation.

Try saying:

  • "convert a 200-word blog post or article text into a 1080p MP4"
  • "convert this text into a free shareable video with visuals and voiceover"
  • "convert written text into a free video for content creators, bloggers, and marketers"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
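The bootstrap flow above can be sketched in Python using only the standard library. This builds the two requests (paths and body taken from the steps above) without sending them; dispatching with `urllib.request.urlopen` is left to the caller, and the function names are illustrative, not part of the skill.

```python
import json
import uuid
import urllib.request

BASE = "https://mega-api-prod.nemovideo.ai"

def anonymous_token_request():
    """Build the anonymous-token request: a fresh UUID goes in X-Client-Id."""
    client_id = str(uuid.uuid4())
    req = urllib.request.Request(
        f"{BASE}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": client_id},
    )
    return req, client_id

def session_request(token: str):
    """Build the session-creation request with Bearer authorization.

    The response's session_id must be kept for all later calls."""
    body = json.dumps({"task_name": "project", "language": "en"}).encode()
    return urllib.request.Request(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```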

Tell the user you're ready. Keep the technical details out of the chat.

AI Text to Free Video — Convert Text into Shareable Videos

This tool takes your text script and runs AI video creation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a 200-word blog post or article text and want to convert this text into a free shareable video with visuals and voiceover — the backend processes it in about 1-2 minutes and hands you a 1080p MP4.

Tip: shorter text inputs under 150 words produce the fastest and most focused videos.

Matching Input to Actions

User prompts referencing ai text to free, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... → Action

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends a file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE

Routes other than §3.1 call their endpoint directly and skip the SSE stream.
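The routing rules above amount to first-match keyword classification. A minimal sketch, assuming a plain substring match (the real skill also applies intent classification, which this omits):

```python
def route(message: str) -> str:
    """Classify a user message into an action section by keyword matching.

    A naive first-match router over the routing table; returns the
    section label for the matched action.
    """
    text = message.lower()
    rules = [
        ("3.5 Export",  ["export", "导出", "download", "send me the video"]),
        ("3.3 Credits", ["credits", "积分", "balance", "余额"]),
        ("3.4 State",   ["status", "状态", "show tracks"]),
        ("3.2 Upload",  ["upload", "上传"]),
    ]
    for section, keywords in rules:
        if any(k in text for k in keywords):
            return section
    return "3.1 SSE"  # everything else: generate, edit, add BGM, ...
```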

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

Endpoints (method, path, purpose):

  • POST /api/tasks/me/with-session/nemo_agent: start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id.
  • POST /run_sse: send a user message. Body includes app_name, session_id, new_message. Stream the response with Accept: text/event-stream. Timeout: 15 min.
  • POST /api/upload-video/nemo_agent/me/<sid>: upload a file (multipart) or a URL.
  • GET /api/credits/balance/simple: check remaining credits (available, frozen, total).
  • GET /api/state/nemo_agent/me/<sid>/latest: fetch the current timeline state (draft, video_infos, generated_media).
  • POST /api/render/proxy/lambda: start an export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s.
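The export call can be sketched as follows. The payload shape and the `render_<ts>` id format come from the endpoint table; the helper name is hypothetical, and since the status-polling endpoint isn't documented here, the sketch only constructs the initial request.

```python
import json
import time
import urllib.request

BASE = "https://mega-api-prod.nemovideo.ai"

def render_request(session_id: str, draft: dict, token: str):
    """Build the export request for /api/render/proxy/lambda.

    The job id uses the "render_<ts>" format; the caller is expected
    to poll status every 30s after sending this.
    """
    body = json.dumps({
        "id": f"render_{int(time.time())}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }).encode()
    return urllib.request.Request(
        f"{BASE}/api/render/proxy/lambda",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```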

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is ai-text-to-free, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

Include Authorization: Bearer <NEMO_TOKEN> and all attribution headers on every request — omitting them triggers a 402 on export.
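The header-derivation rule above can be sketched as a pure function. The path markers and header names come from the text; the function names are illustrative.

```python
from pathlib import Path

def detect_platform(skill_path: str) -> str:
    """Map the skill's install path to the X-Skill-Platform value."""
    parts = Path(skill_path).parts
    if ".clawhub" in parts:
        return "clawhub"
    # cursor installs live under ~/.cursor/skills/<name>
    if ".cursor" in parts and "skills" in parts:
        return "cursor"
    return "unknown"

def attribution_headers(skill_path: str, version: str) -> dict:
    """Attribution headers to include (alongside Authorization) on every request."""
    return {
        "X-Skill-Source": "ai-text-to-free",
        "X-Skill-Version": version,
        "X-Skill-Platform": detect_platform(skill_path),
    }
```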

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
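A table-driven handler is a natural fit for these codes. A sketch with hypothetical action tags; a real handler would perform the recovery action rather than return a label:

```python
def handle_error(code: int) -> str:
    """Map an API error code to the recovery action described above."""
    actions = {
        0: "ok",
        1001: "reacquire-token",       # hit /api/auth/anonymous-token again
        1002: "new-session",
        2001: "out-of-credits",        # anonymous: registration link; registered: top up
        4001: "show-accepted-formats",
        4002: "suggest-compress-or-trim",
        400: "generate-client-id-and-retry",
        402: "subscription-required",  # not a credit issue
        429: "wait-30s-retry-once",
    }
    return actions.get(code, "unknown")
```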

SSE Event Handling

Event → Action

  • Text response → apply GUI translation (§4) and present to the user
  • Tool call/result → process internally; don't forward
  • Heartbeat / empty data: line → keep waiting; every 2 min, show "⏳ Still working..."
  • Stream closes → process the final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
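Heartbeat filtering can be sketched as a minimal parser over raw SSE text. This is a simplification: it ignores `event:` fields and multi-line `data:` payloads, and the function name is illustrative.

```python
def parse_sse(raw: str) -> list:
    """Extract event payloads (lines starting "data:") from raw SSE text.

    Empty data: lines are heartbeats and are dropped; everything else
    is returned in stream order.
    """
    events = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload:  # empty data: lines are keep-alive heartbeats
                events.append(payload)
    return events
```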

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Example timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35% volume)
  3. Title: "Urban Dreams" (0-3s)
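The compact draft keys can be decoded with a small helper. A sketch assuming each segment carries its duration under d (the per-segment layout beyond the documented field mapping is an assumption):

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}

def summarize_draft(draft: dict) -> list:
    """Summarize a compact draft using the field mapping above:
    t=tracks, tt=track type, sg=segments, d=duration (ms)."""
    lines = []
    for track in draft.get("t", []):
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")
        for seg in track.get("sg", []):
            secs = seg.get("d", 0) / 1000  # durations arrive in ms
            lines.append(f"{kind}: {secs:g}s")
    return lines
```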

Common Workflows

Quick edit: Upload → "convert this text into a free shareable video with visuals and voiceover" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "convert this text into a free shareable video with visuals and voiceover" — concrete instructions get better results.

Max file size is 500MB. Stick to TXT, DOCX, PDF, copied text for the smoothest experience.

Export as MP4 for widest compatibility across all social platforms.
