Descript Text To Video

v1.0.0

Convert text scripts into AI-generated videos with this skill. Works with TXT, DOCX, PDF, and SRT files up to 50MB. Content creators use it for converting written...

by peandrover (adam@peand-rover)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for peand-rover/descript-text-to-video.

Prompt preview (Install & Setup):
Install the skill "Descript Text To Video" (peand-rover/descript-text-to-video) from ClawHub.
Skill page: https://clawhub.ai/peand-rover/descript-text-to-video
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Install by bare skill slug:

openclaw skills install descript-text-to-video

ClawHub CLI


npx clawhub@latest install descript-text-to-video

Security Scan

VirusTotal: Benign (view report)
OpenClaw: Benign (medium confidence)
Purpose & Capability
The skill converts text into cloud-rendered videos and only requests a single service credential (NEMO_TOKEN) for nemo/videocloud endpoints. That credential is proportionate to the described functionality. One minor inconsistency: the SKILL.md metadata lists a config path (~/.config/nemovideo/) while the registry summary reported no required config paths.
Instruction Scope
Runtime instructions direct the agent to create sessions, upload files, use SSE, poll renders, and post to https://mega-api-prod.nemovideo.ai — all consistent with a remote rendering service. The skill also instructs the agent to read this file's YAML frontmatter for attribution headers and to detect its install path (~/.clawhub or ~/.cursor/skills/), which requires inspecting paths in the user's home directory. Reading its own SKILL.md is expected, but automatic probing of install paths or config directories could access more of the user's filesystem than strictly necessary.
Install Mechanism
Instruction-only skill with no install spec and no code files — lowest-risk install surface. No external downloads or package installs are requested.
Credentials
Only a single credential (NEMO_TOKEN) is required. The SKILL.md provides a fallback flow to obtain an anonymous token from the service if no token is present. No unrelated secrets or broad credential access are requested.
Persistence & Privilege
Skill is not force-included (always:false) and does not request persistent system-level privileges. Normal autonomous invocation is allowed (disable-model-invocation:false) which is expected for skills; nothing in the instructions claims to modify other skills or global agent settings.
Assessment
This skill appears to do what it says: it will upload scripts and media to mega-api-prod.nemovideo.ai and use a NEMO_TOKEN (or an anonymous token it fetches) to create render jobs. Before installing, confirm you trust the nemovideo.ai domain for processing uploads (don’t send sensitive PII or secrets in files). Note the SKILL.md hints that the agent may probe install paths and a ~/.config/nemovideo/ path — ask how your agent runtime restricts skill file access if you’re concerned about exposing other files. Also verify the skill source/owner (no homepage provided); if you need stronger assurance, request a skill from a known vendor or one with a reachable homepage and source repository.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN (primary)
Latest: vk9749dqtfvc1qkmbyjq4x698r985bfj4
76 downloads · 0 stars · 1 version
Updated 5d ago
v1.0.0 · MIT-0

Getting Started

Share your text script and I'll get started on AI video creation. Or just tell me what you're thinking.

Try saying:

  • "convert my text script"
  • "export 1080p MP4"
  • "turn this script into a video"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is required by all subsequent requests.
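
Putting the connection flow together, here is a minimal Python sketch assuming the requests library. The response field names "token" and "session_id" are assumptions based on the description above, and the attribution headers described later would be attached to these calls as well:

```python
import os
import uuid

import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def get_token() -> str:
    """Use NEMO_TOKEN from the environment, else fetch a free starter token."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    client_id = str(uuid.uuid4())  # fresh UUID as the client identifier
    resp = requests.post(
        f"{BASE}/api/auth/anonymous-token",
        headers={"X-Client-Id": client_id},
        timeout=30,
    )
    resp.raise_for_status()
    # "token" as the field name is an assumption; the docs only say the
    # response includes a token (100 free credits, valid for 7 days).
    return resp.json()["token"]

def create_session(token: str) -> str:
    """Create a session; its session_id is required by all later requests."""
    resp = requests.post(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        json={"task_name": "project", "language": "en"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["session_id"]
```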

Tell the user you're ready. Keep the technical details out of the chat.

Descript Text to Video — Convert Scripts into Finished Videos

Drop your text script in the chat and tell me what you need. I'll handle the AI video creation on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send a 200-word blog post intro, ask to "turn this script into a video with visuals and captions", and about 1-2 minutes later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — shorter scripts under 150 words produce tighter, more focused videos.

Matching Input to Actions

User prompts referencing descript text to video, aspect ratios, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification, per the table below (a routing sketch follows it).

| User says... | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | §3.4 State | Yes |
| "upload" / "上传" / user sends a file | §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | No |
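
One way to implement this routing is keyword matching with an SSE fallback. A sketch follows; the keyword sets mirror the table, and the action names are placeholders:

```python
# Keyword trigger sets per action, mirroring the routing table above.
ROUTES = [
    ({"export", "导出", "download", "send me the video"}, "export"),  # §3.5
    ({"credits", "积分", "balance", "余额"}, "credits"),              # §3.3
    ({"status", "状态", "show tracks"}, "state"),                     # §3.4
    ({"upload", "上传"}, "upload"),                                   # §3.2
]

def route(message: str) -> str:
    text = message.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"  # §3.1: everything else (generate, edit, add BGM, ...)
```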

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> — multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple — returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest — current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
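
As a sketch of the export flow (endpoint 6): the docs say only "render ID and draft JSON", so the request field names, and the "status"/"download_url" response keys, are assumptions:

```python
import time

import requests

def export_and_wait(base: str, headers: dict, render_id: str, draft: dict) -> str:
    """Submit a render job, then poll every 30s until it completes."""
    resp = requests.post(
        f"{base}/api/render/proxy/lambda",
        headers=headers,
        # Field names are assumptions; the docs specify only
        # "render ID and draft JSON".
        json={"render_id": render_id, "draft": draft},
        timeout=30,
    )
    resp.raise_for_status()
    for _ in range(20):  # renders typically return within 30-90 seconds
        time.sleep(30)   # poll interval from the docs
        job = requests.get(
            f"{base}/api/render/proxy/lambda/{render_id}",
            headers=headers,
            timeout=30,
        ).json()
        if job.get("status") == "completed":
            return job["download_url"]  # field name assumed
    raise TimeoutError("render did not complete in time")
```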

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: descript-text-to-video
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from the install path (~/.clawhub → clawhub, ~/.cursor/skills/ → cursor, else unknown)

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
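
A sketch of assembling those headers, assuming PyYAML for the frontmatter (the "version" key comes from the attribution list above; everything else follows the install-path mapping):

```python
from pathlib import Path

import yaml  # PyYAML, an assumption about how the frontmatter is parsed

def attribution_headers(skill_md: Path, token: str) -> dict:
    # YAML frontmatter sits between the first pair of "---" markers.
    frontmatter = yaml.safe_load(
        skill_md.read_text(encoding="utf-8").split("---")[1]
    )
    home = Path.home()
    if (home / ".clawhub") in skill_md.parents:
        platform = "clawhub"
    elif (home / ".cursor" / "skills") in skill_md.parents:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "descript-text-to-video",
        "X-Skill-Version": str(frontmatter["version"]),
        "X-Skill-Platform": platform,
    }
```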

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
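
With that field mapping, the three-track timeline above might serialize to something like the following skeleton (the keys inside each m block are hypothetical):

```python
draft = {
    "t": [  # tracks
        {"tt": 0, "sg": [  # video track
            {"d": 10000, "m": {"clip": "city timelapse"}}]},
        {"tt": 1, "sg": [  # audio track: BGM at 35%
            {"d": 10000, "m": {"clip": "Lo-fi", "volume": 0.35}}]},
        {"tt": 7, "sg": [  # text track: title over the first 3s
            {"d": 3000, "m": {"text": "Urban Dreams"}}]},
    ]
}
```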

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

SSE Event Handling

| Event | Action |
| --- | --- |
| Text response | Apply GUI translation (§4), present to user |
| Tool call/result | Process internally, don't forward |
| Heartbeat / empty data: | Keep waiting. Every 2 min: "⏳ Still working..." |
| Stream closes | Process final response |

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
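
A minimal SSE reader matching this table, assuming requests with stream=True and the standard data: line framing of text/event-stream; the event-classification key ("text") is an assumption:

```python
import json
import time

import requests

def stream_chat(base: str, headers: dict, session_id: str, message: str):
    """POST to /run_sse and yield user-facing text events."""
    resp = requests.post(
        f"{base}/run_sse",
        headers={**headers, "Accept": "text/event-stream"},
        json={"session_id": session_id,
              "new_message": {"parts": [{"text": message}]}},
        stream=True,
    )
    resp.raise_for_status()
    last_nudge = time.monotonic()
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data:"):
            # Heartbeats / empty frames: keep waiting, nudge every 2 min.
            if time.monotonic() - last_nudge > 120:
                print("⏳ Still working...")
                last_nudge = time.monotonic()
            continue
        event = json.loads(line[len("data:"):].strip())
        # Text responses get GUI translation (§4) and go to the user;
        # tool calls/results stay internal. The "text" key is assumed.
        if "text" in event:
            yield event["text"]
    # Stream closed: process the final response; if no text arrived
    # (~30% of edits), poll session state and summarize for the user.
```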

Error Handling

| Code | Meaning | Action |
| --- | --- | --- |
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session (§3.0) |
| 2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compress/trim |
| 400 | Missing X-Client-Id | Generate a Client-Id and retry (see §1) |
| 402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry in 30s, once |
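
This recovery logic can be written as a dispatch over the codes. A sketch reusing get_token and create_session from the connection sketch above; the ctx object and its notify method are placeholders:

```python
import time
import uuid

def handle_error(code: int, ctx) -> None:
    """Apply the recovery action for a backend error code."""
    if code == 0:
        return                                      # success, continue
    if code == 1001:                                # bad/expired token
        ctx.token = get_token()                     # re-auth (§1)
    elif code == 1002:                              # session not found
        ctx.session_id = create_session(ctx.token)  # new session (§3.0)
    elif code == 2001:                              # out of credits
        ctx.notify("Out of credits: register (?bind=<id>) or top up.")
    elif code in (4001, 4002):                      # file format/size
        ctx.notify("Check file format and size (50MB max).")
    elif code == 400:                               # missing X-Client-Id
        ctx.client_id = str(uuid.uuid4())           # regenerate, then retry
    elif code == 402:                               # export blocked by plan
        ctx.notify("Register or upgrade your plan to unlock export.")
    elif code == 429:                               # rate limit
        time.sleep(30)                              # retry once after 30s
```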

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn this script into a video with visuals and captions" — concrete instructions get better results.

Max file size is 50MB. Stick to TXT, DOCX, PDF, SRT for the smoothest experience.

Export as MP4 for widest compatibility across social platforms.

Common Workflows

Quick edit: Upload → "turn this script into a video with visuals and captions" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
