Free Video Letter Maker

v1.0.0

Turn a short personal message and one photo into 1080p personalized video letters just by typing what you need. Whether it's turning written messages into sh...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for francemichaell-15/free-video-letter-maker.

Prompt preview (Install & Setup):
Install the skill "Free Video Letter Maker" (francemichaell-15/free-video-letter-maker) from ClawHub.
Skill page: https://clawhub.ai/francemichaell-15/free-video-letter-maker
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install free-video-letter-maker

ClawHub CLI

Package manager switcher

npx clawhub@latest install free-video-letter-maker
Security Scan

VirusTotal: Benign (View report →)
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (video-letter creation) match the required credential (NEMO_TOKEN) and the API endpoints described in SKILL.md. One minor inconsistency: the registry metadata shown earlier lists no config paths, while the skill frontmatter inside SKILL.md advertises a config path (~/.config/nemovideo/) — that is plausible (used to find stored tokens) but should have been declared consistently.
Instruction Scope
Instructions stay inside the stated purpose: they establish a session, upload files, stream SSE, poll renders, and return download URLs. The skill tells the agent how to obtain an anonymous token if NEMO_TOKEN is missing, how to upload local files (multipart @/path), and how to read/poll session state. Two items to note: (1) it requires adding attribution headers that reference the skill frontmatter/version and asks to auto-detect an install path for X-Skill-Platform — that may require the agent to inspect its environment/install path (or fall back to 'unknown'); (2) SKILL.md suggests reading ~/.config/nemovideo/ (frontmatter) which is reasonable to locate stored tokens but is not declared in the registry-level requirements earlier. Both are scope-adjacent but explainable for this type of integration.
Install Mechanism
Instruction-only skill with no install spec and no code files — lowest-risk install surface. Nothing is downloaded or executed by an installer.
Credentials
Only a single credential (NEMO_TOKEN) is declared as required and is appropriate for a cloud service. The runtime also describes how to obtain a short-lived anonymous token if none is provided, which is consistent with the stated free-tier behavior. There are no unrelated secret variables requested.
Persistence & Privilege
always is false and the skill does not request elevated or system-wide persistence. It does not attempt to modify other skills or global agent settings. The only potential persistent read is the optional config path (~/.config/nemovideo/) to locate tokens, which is reasonable for a client that can use locally stored credentials.
Assessment
This skill appears to do exactly what it says: it will upload media and text to a nemovideo cloud API to produce rendered videos. Before installing or using it, consider: (1) You are trusting the external domain (mega-api-prod.nemovideo.ai) with any files and the NEMO_TOKEN — don't send sensitive PII or corporate secrets. (2) The skill can look in ~/.config/nemovideo/ (per its frontmatter) to find tokens; if you keep tokens there and don't want them used, remove or lock that file. (3) If you don't have a NEMO_TOKEN the agent will request an anonymous token from the service (expected behavior). (4) The SKILL.md and registry metadata disagree about config paths — harmless in itself but worth noting. If you need stronger guarantees, ask the skill author for a privacy policy or an explicit list of endpoints and headers, or run this skill in an environment where you control what files and tokens are accessible.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

✉️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97ew6bjzvh3rkyp4h0z4b82yn84ndw8
90 downloads · 0 stars · 1 version
Updated 2w ago
v1.0.0
MIT-0

Getting Started

Share your text or photos and I'll get started on AI video letter creation. Or just tell me what you're thinking.

Try saying:

  • "create a video letter from my text or photos"
  • "export 1080p MP4"
  • "turn my written message into a video letter with music and animated text"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
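The bootstrap above can be sketched as request construction in Python. The endpoint paths, headers, and body fields are taken from this page; the helper names and the use of `urllib` are my own choices, and actually sending the requests is left to the caller.

```python
import json
import uuid
import urllib.request

API_BASE = "https://mega-api-prod.nemovideo.ai"

def build_token_request():
    """Request a free starter token when NEMO_TOKEN is not set.

    A fresh UUID serves as the X-Client-Id, per the steps above.
    """
    client_id = str(uuid.uuid4())
    return urllib.request.Request(
        f"{API_BASE}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": client_id},
    )

def build_session_request(token, language="en"):
    """Build the create-session request; the response carries session_id."""
    body = json.dumps({"task_name": "project", "language": language}).encode()
    return urllib.request.Request(
        f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```

Sending each request with `urllib.request.urlopen(...)` and reading the JSON response yields the token and `session_id` used by every later call.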

Tell the user you're ready. Keep the technical details out of the chat.

Free Video Letter Maker — Turn Messages Into Video Letters

This tool takes your text or photos and runs AI video letter creation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a short personal message and one photo and want to "turn my written message into a video letter with music and animated text" — the backend processes it in about 30-60 seconds and hands you a 1080p MP4.

Tip: keep your written message under 150 words for the cleanest on-screen text pacing.

Matching Input to Actions

User prompts referencing free video letter maker, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... → Action:

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends a file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
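The keyword routing above can be sketched as a first-match lookup. The trigger strings come from the table; the function name, the ordering, and the returned labels are my own illustration of one plausible implementation.

```python
# First matching keyword wins; anything unmatched falls through to SSE.
ROUTES = [
    ("3.5 Export", ("export", "导出", "download", "send me the video")),
    ("3.3 Credits", ("credits", "积分", "balance", "余额")),
    ("3.4 State", ("status", "状态", "show tracks")),
    ("3.2 Upload", ("upload", "上传")),
]

def route(message: str) -> str:
    """Map a user message to the section handling it."""
    text = message.lower()
    for action, keywords in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "3.1 SSE"  # everything else: generate, edit, add BGM, ...
```

A real router would add intent classification on top; substring matching alone misfires on messages like "don't export yet".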

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Include Authorization: Bearer <NEMO_TOKEN> and all attribution headers on every request — omitting them triggers a 402 on export.

Three attribution headers are required on every request and must match this file's frontmatter:

  • X-Skill-Source: free-video-letter-maker
  • X-Skill-Version: the frontmatter version
  • X-Skill-Platform: auto-detected (clawhub / cursor / unknown) from the install path
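A minimal sketch of building these headers, assuming the platform is inferred from a substring of the install path (the detection heuristic is my own; the header names and values come from the table above):

```python
def attribution_headers(skill_version: str, install_path: str = "") -> dict:
    """Build the three required attribution headers.

    Falls back to "unknown" when the install path reveals no platform.
    """
    if "clawhub" in install_path:
        platform = "clawhub"
    elif "cursor" in install_path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "X-Skill-Source": "free-video-letter-maker",
        "X-Skill-Version": skill_version,
        "X-Skill-Platform": platform,
    }
```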

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
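The export-and-poll flow can be sketched as follows. The endpoint, body fields, and 30-second interval come from this page; the helper names are mine, and `fetch_status` is a hypothetical callback standing in for the actual GET request.

```python
import json
import time
import urllib.request

API_BASE = "https://mega-api-prod.nemovideo.ai"

def build_export_request(token, session_id, draft, ts):
    """Build the POST /api/render/proxy/lambda request for a free export."""
    body = json.dumps({
        "id": f"render_{ts}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }).encode()
    return urllib.request.Request(
        f"{API_BASE}/api/render/proxy/lambda",
        data=body,
        method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )

def poll_until_done(render_id, fetch_status, interval=30, max_tries=20):
    """Poll GET /api/render/proxy/lambda/<id> until status == completed.

    fetch_status(render_id) must return the decoded JSON status dict.
    """
    for _ in range(max_tries):
        state = fetch_status(render_id)
        if state.get("status") == "completed":
            return state["output"]["url"]
        time.sleep(interval)
    raise TimeoutError(f"render {render_id} did not complete")
```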

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
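The error codes above map naturally onto a recovery dispatch table. A minimal sketch, where the action labels are my own shorthand for the recovery steps listed:

```python
# Code -> recovery action, per the error-code list above.
RECOVERY = {
    0: "continue",
    1001: "reacquire_token",        # via /api/auth/anonymous-token
    1002: "recreate_session",
    2001: "out_of_credits",          # registration link or top-up
    4001: "show_supported_formats",
    4002: "suggest_compression",
    400: "add_client_id_and_retry",
    402: "subscription_required",    # not a credit issue
    429: "wait_30s_and_retry_once",
}

def recovery_action(code: int) -> str:
    """Return the recovery step for a code; unknown codes surface to the user."""
    return RECOVERY.get(code, "surface_error_to_user")
```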

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty "data:" lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
1. Video: city timelapse (0-10s)
2. BGM: Lo-fi (0-10s, 35%)
3. Title: "Urban Dreams" (0-3s)
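Producing that summary from the short-key draft JSON can be sketched as below. The key names (t, tt, sg, d, m) come from this page; the metadata "name" field and the exact line format are illustrative assumptions.

```python
# Short keys per the draft JSON docs: t=tracks, tt=track type,
# sg=segments, d=duration (ms), m=metadata.
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize(draft: dict) -> str:
    """Render a one-line-per-track timeline summary for the chat."""
    lines = [f"Timeline ({len(draft['t'])} tracks):"]
    for i, track in enumerate(draft["t"], 1):
        kind = TRACK_TYPES.get(track["tt"], "Unknown")
        total_ms = sum(seg["d"] for seg in track.get("sg", []))
        name = track.get("m", {}).get("name", "")  # assumed metadata field
        lines.append(f"{i}. {kind}: {name} (0-{total_ms // 1000}s)")
    return "\n".join(lines)
```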

Common Workflows

Quick edit: Upload → "turn my written message into a video letter with music and animated text" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn my written message into a video letter with music and animated text" — concrete instructions get better results.

Max file size is 200MB. Stick to MP4, MOV, JPG, PNG for the smoothest experience.

Export as MP4 for widest compatibility across email, social, and messaging apps.
