Sora Free Text To Video

v1.0.0

Get AI generated videos ready to post, without touching a single slider. Upload your text prompts (TXT, DOCX, PDF, plain text, up to 500MB), say something li...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for linmillsd7/sora-free-text-to-video.

Prompt preview: Install & Setup
Install the skill "Sora Free Text To Video" (linmillsd7/sora-free-text-to-video) from ClawHub.
Skill page: https://clawhub.ai/linmillsd7/sora-free-text-to-video
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install sora-free-text-to-video

ClawHub CLI


npx clawhub@latest install sora-free-text-to-video
Security Scan
VirusTotal: Pending
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill claims to generate videos from text, requires only an API token (NEMO_TOKEN), and uses endpoints on mega-api-prod.nemovideo.ai; this is proportional and expected for a cloud-render video service.
Instruction Scope
SKILL.md gives concrete API calls (session creation, SSE, upload, render/poll) and explicit headers; these are within the claimed purpose. It also instructs the agent to detect the install path to set an X-Skill-Platform header and to keep technical details out of chat — detecting install path is minor scope creep but understandable for attribution/telemetry.
Install Mechanism
No install spec or code files are present (instruction-only), so nothing is written to disk by an installer. This is the lowest-risk install model.
Credentials
Only NEMO_TOKEN is required which matches the API usage. However, the SKILL.md frontmatter references a config path (~/.config/nemovideo/) while the registry metadata listed 'Required config paths: none' — this inconsistency should be resolved because local config access could expose tokens/configs.
Persistence & Privilege
always:false and normal autonomous invocation are in place. The skill does not request persistent system-wide changes or modify other skills. It will send user uploads and prompts to the remote service as expected.
Assessment
This skill appears to do what it says: it will send your prompts and uploaded files to nemovideo.ai and needs a NEMO_TOKEN to authenticate (or it will obtain an anonymous starter token from their anonymous-token endpoint). Before installing: (1) confirm you trust nemovideo.ai with any content you upload (do not upload sensitive or confidential files), (2) prefer using a revocable/service-limited token rather than permanent credentials, (3) verify whether the skill will read ~/.config/nemovideo/ (registry metadata and SKILL.md disagree) and remove any sensitive tokens from that path if you don't want them used, and (4) expect network activity (session creation, SSE, uploads, and downloads) as part of normal operation.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk9787ef24qzjbxexcn7rgfmc9s85c6zf
65 downloads
0 stars
1 version
Updated 5d ago
v1.0.0
License: MIT-0

Getting Started

Ready when you are. Drop your text prompts here or describe what you want to make.

Try saying:

  • "generate a short text description like 'a fox running through a snowy forest at sunset' into a 1080p MP4"
  • "generate a 10-second video of a city street at night with neon lights reflecting on wet pavement"
  • "generating videos from text descriptions without any footage for content creators, marketers, social media managers"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
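
For reference, a minimal shell sketch of this handshake, assuming bash with curl, uuidgen, and jq available; the token and session_id field names follow the descriptions above, so verify them against the live responses:

if [ -z "$NEMO_TOKEN" ]; then
  # No token in the environment: request a free starter token (100 credits, 7 days).
  CLIENT_ID=$(uuidgen)
  NEMO_TOKEN=$(curl -s -X POST \
    -H "X-Client-Id: $CLIENT_ID" \
    https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token \
    | jq -r '.token')   # exact response field name assumed
fi

# Create a session; session_id is reused on every later request.
SESSION_ID=$(curl -s -X POST \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"task_name":"project","language":"en"}' \
  https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent \
  | jq -r '.session_id')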

Tell the user you're ready. Keep the technical details out of the chat.

Sora Free Text to Video — Generate Videos From Text Prompts

Send me your text prompts and describe the result you want. The AI video generation runs on remote GPU nodes — nothing to install on your machine.

A quick example: give it a short description like 'a fox running through a snowy forest at sunset', or type "generate a 10-second video of a city street at night with neon lights reflecting on wet pavement", and you'll get a 1080p MP4 back in roughly 1-3 minutes. All rendering happens server-side.

Worth noting: shorter, more specific prompts tend to produce more accurate and consistent results.

Matching Input to Actions

User prompts referencing sora free text to video, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... | Action | Skip SSE?
"export" / "导出" / "download" / "send me the video"→ §3.5 Export
"credits" / "积分" / "balance" / "余额"→ §3.3 Credits
"status" / "状态" / "show tracks"→ §3.4 State
"upload" / "上传" / user sends file→ §3.2 Upload
Everything else (generate, edit, add BGM…)→ §3.1 SSE
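
A minimal sketch of that routing, assuming it reduces to case-insensitive keyword matching (the actual intent classification may be broader); the section labels mirror the table above:

route_message() {
  # Lower-case the input so the English keywords match regardless of case;
  # the Chinese keywords are matched as-is.
  local msg
  msg=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$msg" in
    *export*|*导出*|*download*|*"send me the video"*) echo "3.5 Export"  ;;
    *credits*|*积分*|*balance*|*余额*)                echo "3.3 Credits" ;;
    *status*|*状态*|*"show tracks"*)                  echo "3.4 State"   ;;
    *upload*|*上传*)                                  echo "3.2 Upload"  ;;
    *)                                                echo "3.1 SSE"     ;;
  esac
}

route_message "Can you export the final cut?"   # prints: 3.5 Export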

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is sora-free-text-to-video, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

Include Authorization: Bearer <NEMO_TOKEN> and all attribution headers on every request — omitting them triggers a 402 on export.
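
A sketch of assembling those headers in bash; the install-path checks mirror the detection rules above, and 1.0.0 is this skill's version field:

# Detect the install platform from where the skill lives on disk.
if   [ -d "$HOME/.clawhub" ];       then PLATFORM=clawhub
elif [ -d "$HOME/.cursor/skills" ]; then PLATFORM=cursor
else                                     PLATFORM=unknown
fi

# Auth plus attribution headers to attach to every request.
ATTRIBUTION=(
  -H "Authorization: Bearer $NEMO_TOKEN"
  -H "X-Skill-Source: sora-free-text-to-video"
  -H "X-Skill-Version: 1.0.0"
  -H "X-Skill-Platform: $PLATFORM"
)

# Example: curl -s "${ATTRIBUTION[@]}" https://mega-api-prod.nemovideo.ai/api/credits/balance/simple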

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
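
As a curl sketch, with -N so events stream as they arrive and --max-time matching the 15-minute ceiling; the message text is just an example:

curl -N -s --max-time 900 -X POST \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"app_name":"nemo_agent","user_id":"me","session_id":"'"$SESSION_ID"'","new_message":{"parts":[{"text":"generate a 10-second video of a city street at night"}]}}' \
  https://mega-api-prod.nemovideo.ai/run_sse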

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}
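
Both upload variants as curl sketches; the file path and source URL are placeholders:

# Multipart upload of a local file
curl -s -X POST \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -F "files=@/path/to/prompts.txt" \
  "https://mega-api-prod.nemovideo.ai/api/upload-video/nemo_agent/me/$SESSION_ID"

# Import from a URL instead
curl -s -X POST \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"urls":["https://example.com/clip.mp4"],"source_type":"url"}' \
  "https://mega-api-prod.nemovideo.ai/api/upload-video/nemo_agent/me/$SESSION_ID"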

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media
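
Both read-only calls as curl sketches; jq is used only to pull out the fields named above:

# Credit balance: available, frozen, total
curl -s -H "Authorization: Bearer $NEMO_TOKEN" \
  https://mega-api-prod.nemovideo.ai/api/credits/balance/simple | jq .

# Latest session state: draft, video_infos, generated_media under data.state
curl -s -H "Authorization: Bearer $NEMO_TOKEN" \
  "https://mega-api-prod.nemovideo.ai/api/state/nemo_agent/me/$SESSION_ID/latest" \
  | jq '.data.state | {draft, video_infos, generated_media}'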

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
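
An export-and-poll sketch following the 30-second cadence above; it reuses the draft from session state and assumes the status and output.url fields exactly as documented (attribution headers omitted for brevity, but remember the 402 warning):

# Pull the current draft out of session state.
DRAFT=$(curl -s -H "Authorization: Bearer $NEMO_TOKEN" \
  "https://mega-api-prod.nemovideo.ai/api/state/nemo_agent/me/$SESSION_ID/latest" \
  | jq -c '.data.state.draft')

# Queue the export job.
RENDER_ID="render_$(date +%s)"
curl -s -X POST \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"id":"'"$RENDER_ID"'","sessionId":"'"$SESSION_ID"'","draft":'"$DRAFT"',"output":{"format":"mp4","quality":"high"}}' \
  https://mega-api-prod.nemovideo.ai/api/render/proxy/lambda

# Poll every 30 s until status = completed, then download output.url.
while :; do
  STATUS=$(curl -s -H "Authorization: Bearer $NEMO_TOKEN" \
    "https://mega-api-prod.nemovideo.ai/api/render/proxy/lambda/$RENDER_ID")
  [ "$(echo "$STATUS" | jq -r '.status')" = "completed" ] && break
  sleep 30
done
curl -sL -o output.mp4 "$(echo "$STATUS" | jq -r '.output.url')"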

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

SSE Event Handling

Event | Action
Text response | Apply GUI translation (§4), present to user
Tool call/result | Process internally, don't forward
heartbeat / empty data: | Keep waiting. Every 2 min: "⏳ Still working..."
Stream closes | Process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
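
A hypothetical draft fragment for that timeline, written with the abbreviated keys above; the segment-level fields (name, volume, text) are illustrative guesses, not documented:

# t = tracks, tt = track type (0 video, 1 audio, 7 text),
# sg = segments, d = duration in ms, m = metadata.
DRAFT='{
  "t": [
    {"tt": 0, "sg": [{"d": 10000, "m": {"name": "city timelapse"}}]},
    {"tt": 1, "sg": [{"d": 10000, "m": {"name": "Lo-fi BGM", "volume": 0.35}}]},
    {"tt": 7, "sg": [{"d": 3000,  "m": {"text": "Urban Dreams"}}]}
  ]
}'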

Error Handling

Code | Meaning | Action
0 | Success | Continue
1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days)
1002 | Session not found | New session §3.0
2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account"
4001 | Unsupported file | Show supported formats
4002 | File too large | Suggest compress/trim
400 | Missing X-Client-Id | Generate Client-Id and retry (see §1)
402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
429 | Rate limit (1 token/client/7 days) | Retry in 30s once
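
One recovery path that is easy to automate client-side is the 429 case; a sketch, assuming the wrapper is passed the same curl arguments used elsewhere on this page (code 1001 still means re-running the anonymous-token step, which is not automated here):

# Retry once after 30 s on HTTP 429, as the table prescribes.
with_retry() {
  local http
  http=$(curl -s -o /tmp/nemo_resp.json -w '%{http_code}' "$@")
  if [ "$http" = "429" ]; then
    sleep 30
    curl -s -o /tmp/nemo_resp.json "$@"
  fi
  cat /tmp/nemo_resp.json
}

# Example: with_retry -H "Authorization: Bearer $NEMO_TOKEN" https://mega-api-prod.nemovideo.ai/api/credits/balance/simple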

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate a 10-second video of a city street at night with neon lights reflecting on wet pavement" — concrete instructions get better results.

Max file size is 500MB. Stick to TXT, DOCX, PDF, plain text for the smoothest experience.

Export as MP4 for widest compatibility across platforms and devices.

Common Workflows

Quick edit: Upload → "generate a 10-second video of a city street at night with neon lights reflecting on wet pavement" → Download MP4. Takes 1-3 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
