Free Shorts

v1.0.0

Skip the learning curve of professional editing software. Describe what you want, e.g. "cut this into a 60-second short for TikTok with captions", and get short v...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for vcarolxhberger/free-shorts.

Prompt preview (Install & Setup):
Install the skill "Free Shorts" (vcarolxhberger/free-shorts) from ClawHub.
Skill page: https://clawhub.ai/vcarolxhberger/free-shorts
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install free-shorts

ClawHub CLI

Package manager switcher

npx clawhub@latest install free-shorts
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description (create short videos) lines up with the API endpoints, upload/render/export workflow, and the single required credential (NEMO_TOKEN). The declared ability to upload and render remotely is coherent with the listed endpoints and SSE usage.
Instruction Scope
SKILL.md instructs the agent to create sessions, upload user media, stream SSE responses, poll render status, and download URLs — all within the stated purpose. Two items worth noting: (1) it instructs the agent to "keep the technical details out of the chat," meaning API calls and tokens may be hidden from users; (2) it asks to read the skill's YAML frontmatter and probe specific install paths to set an X-Skill-Platform header. Those filesystem checks are plausible but expand scope beyond pure API usage and reduce transparency.
Install Mechanism
Instruction-only skill with no install spec or code files — lowest install risk. Nothing is downloaded or written by an installer in the manifest.
Credentials
Only a single credential (NEMO_TOKEN) is required, which is proportionate for a cloud API client. However, the frontmatter also lists a config path (~/.config/nemovideo/) that the registry metadata did not declare; this mismatch suggests the skill might attempt to read local config if present. Supplying a full-account NEMO_TOKEN grants the skill the same privileges as that account (including credit usage and access to account data), so use caution.
Persistence & Privilege
always:false and default invocation settings. The skill does not request permanent platform-wide presence or other skills' settings. Session jobs are server-side; the skill warns jobs are orphaned if the client disconnects.
Assessment
This skill appears to do what it says: it uploads your videos to a remote nemovideo.ai service, creates a session, runs edits on cloud GPUs, and returns a download URL. Before installing or supplying NEMO_TOKEN:

  • Only provide a token you trust: the token can be used to access and act under your account (consume credits, view account data). Prefer a throwaway/test token if possible.
  • Uploads are sent to an external service (nemovideo.ai). Do not upload sensitive/private footage without checking the provider's privacy/retention policy.
  • The skill can generate an anonymous token if none is supplied, which still authenticates actions under that anonymous session.
  • The manifest frontmatter references a local config path (~/.config/nemovideo/) and install-path detection that the registry did not declare. Ask the author why the skill may read local config and which files it will access.
  • If you need transparency, ask the maintainer to remove the instruction to "keep technical details out of the chat" so the agent reports API activity, and test with non-sensitive samples first.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

  • Runtime: 🎬 Clawdis
  • Env: NEMO_TOKEN (primary)
  • Latest version hash: vk97f7frvbcdv5mzdk2tht0ewj18545kh
  • 116 downloads · 0 stars · 1 version
  • Updated 1 week ago
  • v1.0.0, MIT-0 license

Getting Started

Share your video clips and I'll get started on AI short video creation. Or just tell me what you're thinking.

Try saying:

  • "generate my video clips"
  • "export 1080p MP4"
  • "cut this into a 60-second short"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
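The bootstrap flow above can be sketched in Python with only the standard library. The endpoints, headers, and request bodies come from this page; the response field names (`token`, `session_id`) and the helper names are assumptions for illustration, not the skill's actual implementation:

```python
import json
import uuid
import urllib.request

API_BASE = "https://mega-api-prod.nemovideo.ai"

def anonymous_token_request() -> urllib.request.Request:
    """Build the anonymous-token request: fresh UUID client id, no body."""
    return urllib.request.Request(
        f"{API_BASE}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": str(uuid.uuid4())},
    )

def create_session_request(token: str, language: str = "en") -> urllib.request.Request:
    """Build the create-session request with Bearer authorization."""
    body = json.dumps({"task_name": "project", "language": language}).encode()
    return urllib.request.Request(
        f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# Actually sending the requests (network; response field names assumed):
# with urllib.request.urlopen(anonymous_token_request()) as r:
#     token = json.load(r)["token"]
# with urllib.request.urlopen(create_session_request(token)) as r:
#     session_id = json.load(r)["session_id"]
```

The anonymous token carries 100 free credits for 7 days, so the same flow works with or without a pre-set NEMO_TOKEN.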

Tell the user you're ready. Keep the technical details out of the chat.

Free Shorts — Create and Export Short Videos

Send me your video clips and describe the result you want. The AI short video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a 10-minute YouTube video recording, type "cut this into a 60-second short for TikTok with captions", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: vertical 9:16 video works best for Reels and TikTok uploads.

Matching Input to Actions

User prompts referencing free shorts, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says → Action
  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends a file → §3.2 Upload
  • Everything else (generate, edit, add BGM, …) → §3.1 SSE

Only the last route goes through the SSE stream; the others call their endpoints directly.
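The routing above amounts to a small keyword classifier. A minimal sketch (the function name, constant names, and first-match priority order are mine):

```python
# Keyword lists mirror the routing table; Chinese keywords are kept verbatim.
ROUTES = [
    ("export",  ["export", "导出", "download", "send me the video"]),
    ("credits", ["credits", "积分", "balance", "余额"]),
    ("state",   ["status", "状态", "show tracks"]),
    ("upload",  ["upload", "上传"]),
]

def route(message: str, has_attachment: bool = False) -> str:
    """Map a user message to an action; anything unmatched falls through to SSE."""
    if has_attachment:          # "user sends file" row
        return "upload"
    text = message.lower()
    for action, keywords in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"                # generate, edit, add BGM, ...
```

Real intent classification would be fuzzier than substring matching, but this captures the table's priority order.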

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: free-shorts
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/clawhub, ~/.cursor/skills/cursor, else unknown)

Include Authorization: Bearer <NEMO_TOKEN> and all attribution headers on every request — omitting them triggers a 402 on export.
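The attribution requirements above can be sketched as a header builder. The platform strings follow the install paths listed in this section; the function names and the substring-based path detection are assumptions:

```python
def detect_platform(install_path: str) -> str:
    """Map the skill's install path to an X-Skill-Platform value."""
    if "/.clawhub/" in install_path:
        return "clawhub"
    if "/.cursor/skills/" in install_path:
        return "cursor"
    return "unknown"

def attribution_headers(token: str, version: str, install_path: str) -> dict:
    """Headers required on every request; omitting them triggers a 402 on export."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "free-shorts",
        "X-Skill-Version": version,  # read from the SKILL.md YAML frontmatter
        "X-Skill-Platform": detect_platform(install_path),
    }
```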

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
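A rough sketch of the export-and-poll loop just described. The request body and 30-second poll interval come from this page; `export_body` and `poll_export` are hypothetical helper names, and the job-response fields (`status`, `output.url`) follow the text:

```python
import json
import time
import urllib.request

API_BASE = "https://mega-api-prod.nemovideo.ai"

def export_body(session_id: str, draft: dict) -> dict:
    """Body for POST /api/render/proxy/lambda (export is free, no credits)."""
    return {
        "id": f"render_{int(time.time())}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }

def poll_export(render_id: str, headers: dict, interval: int = 30) -> str:
    """Poll the render job every `interval` seconds until completed.

    Returns the download URL. Network code is shown for shape only.
    """
    url = f"{API_BASE}/api/render/proxy/lambda/{render_id}"
    while True:
        req = urllib.request.Request(url, headers=headers)
        with urllib.request.urlopen(req) as r:
            job = json.load(r)
        if job.get("status") == "completed":
            return job["output"]["url"]
        time.sleep(interval)
```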

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
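A pre-flight check built from the format list above and the 500MB cap mentioned under Tips; `check_upload` is a hypothetical helper, not part of the skill:

```python
from typing import Optional

SUPPORTED = {"mp4", "mov", "avi", "webm", "mkv", "jpg", "png", "gif",
             "webp", "mp3", "wav", "m4a", "aac"}

def check_upload(filename: str, size_bytes: int) -> Optional[str]:
    """Return a user-facing error message, or None if the file looks acceptable."""
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext not in SUPPORTED:
        return f"Unsupported format .{ext} (code 4001)"
    if size_bytes > 500 * 1024 * 1024:  # 500MB limit (see Tips and Tricks)
        return "File too large (code 4002): compress or trim first"
    return None
```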

SSE Event Handling

Event → Action
  • Text response → apply GUI translation (§4), present to user
  • Tool call/result → process internally, don't forward
  • Heartbeat / empty data: → keep waiting; every 2 min show "⏳ Still working..."
  • Stream closes → process the final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
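The event handling above reduces to a classifier over parsed SSE events, with the silent-edit case falling through to a state poll. The event shape here (a dict with a `data` field holding `text`, `tool_call`, or `tool_result`) is an assumption for illustration:

```python
def classify_sse_event(event: dict) -> str:
    """Classify a parsed SSE event per the handling table above.

    The event dict shape is assumed; the real stream format may differ.
    """
    data = (event or {}).get("data")
    if not data:
        return "heartbeat"  # keep waiting; nudge the user every 2 minutes
    if "tool_call" in data or "tool_result" in data:
        return "tool"       # process internally, don't forward
    if data.get("text"):
        return "text"       # translate GUI phrasing (§4), show to user
    return "empty"          # silent edit: poll session state, then summarize
```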

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Example timeline summary (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
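A timeline summary like this could be produced by a decoder over the compact draft fields (t, tt, sg, d) mapped above. The exact draft JSON shape and the helper name are assumptions:

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}  # tt values from the mapping

def summarize_draft(draft: dict) -> list:
    """Turn compact draft JSON into human-readable track lines.

    Fields follow the mapping above: t=tracks, tt=track type,
    sg=segments, d=duration in ms.
    """
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")
        total_ms = sum(seg.get("d", 0) for seg in track.get("sg", []))
        lines.append(f"{i}. {kind}: {total_ms / 1000:.0f}s")
    return lines
```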

Error Handling

Code · Meaning · Action
  • 0 · Success · Continue
  • 1001 · Bad/expired token · Re-auth via anonymous-token (tokens expire after 7 days)
  • 1002 · Session not found · Create a new session (§3.0)
  • 2001 · No credits · Anonymous: show the registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account"
  • 4001 · Unsupported file · Show supported formats
  • 4002 · File too large · Suggest compressing or trimming
  • 400 · Missing X-Client-Id · Generate a Client-Id and retry (see §1)
  • 402 · Free plan export blocked · Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
  • 429 · Rate limit (1 token per client per 7 days) · Retry once after 30s
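The error handling above reduces to a simple lookup. The action strings here paraphrase the recovery steps, and the fallback for unknown codes is mine:

```python
def handle_error(code: int) -> str:
    """Map an API error code to its recovery step (paraphrased from the table)."""
    actions = {
        0: "continue",
        1001: "re-auth via anonymous-token",         # tokens expire after 7 days
        1002: "create a new session",
        2001: "show registration URL or suggest a top-up",
        4001: "show supported formats",
        4002: "suggest compressing or trimming the file",
        400: "generate an X-Client-Id and retry",
        402: "explain the plan limitation (not a credits issue)",
        429: "retry once after 30s",
    }
    return actions.get(code, "surface the raw error to the user")
```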

Common Workflows

Quick edit: Upload → "cut this into a 60-second short for TikTok with captions" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "cut this into a 60-second short for TikTok with captions" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility across social platforms.
