Shorts Repurposer

v1.0.0

Get vertical short clips ready to post without touching a single slider. Upload your long-form videos (MP4, MOV, AVI, WebM, up to 500MB) and say something like...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for whitejohnk-26/shorts-repurposer.

Prompt preview (Install & Setup):
Install the skill "Shorts Repurposer" (whitejohnk-26/shorts-repurposer) from ClawHub.
Skill page: https://clawhub.ai/whitejohnk-26/shorts-repurposer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install shorts-repurposer

ClawHub CLI

Package manager switcher

npx clawhub@latest install shorts-repurposer
Security Scan
VirusTotal: Benign (view report →)
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name/description (convert long videos into vertical shorts) matches the actions the SKILL.md describes: creating a session, uploading videos, requesting renders, and returning download URLs. Requesting a NEMO_TOKEN and reading a nemovideo config path is coherent for a cloud video processing service. Note: the registry metadata at the top of the report listed no required config paths, but the SKILL.md frontmatter declares a configPaths entry (~/.config/nemovideo/); this is an internal inconsistency in the skill metadata but does not contradict the stated purpose.
Instruction Scope
All instructions are tightly scoped to interacting with the nemovideo backend (auth, session creation, SSE, upload, export, state/credits endpoints). The skill will upload user-provided videos and poll for render status — this is expected behavior. The skill also instructs the agent to detect its install path and read the skill's YAML frontmatter to populate attribution headers; this is reasonable for header population but does require reading the skill file and checking common install directories. Important: user videos and metadata are transmitted to an external domain (mega-api-prod.nemovideo.ai). The instructions explicitly say to avoid exposing tokens in UI output.
Install Mechanism
This is an instruction-only skill with no install spec and no code files — nothing is downloaded or written by an installer. That is the lowest-risk install pattern.
Credentials
The only required environment credential is NEMO_TOKEN (declared as primaryEnv) which is proportionate: the API requires Bearer auth. The skill also supports creating a short-lived anonymous token via an API call if NEMO_TOKEN is not present. No other unrelated secrets or credentials are requested.
Persistence & Privilege
The skill does not request 'always: true' and has no install-time persistence. It creates sessions on the remote service for render jobs (normal for this functionality) but does not request system-wide privileges or to modify other skills' configs.
Assessment
This skill will upload any video you provide to a third-party cloud service (mega-api-prod.nemovideo.ai) for processing and will use a NEMO_TOKEN (either one you provide or a short-lived anonymous token obtained from the service). Before installing or using it:

  • Do not upload sensitive or private videos unless you trust the service's privacy policy.
  • If you must use a token, prefer a scoped/ephemeral token rather than a long-lived personal credential.
  • Note the metadata mismatch: the frontmatter mentions ~/.config/nemovideo/ while the registry summary did not; confirm what local config (if any) the skill will try to read.
  • Because the skill's source/homepage are unknown, consider verifying the nemovideo domain and service independently (privacy, terms, data retention) or contacting the provider before sending production or private content.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

✂️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk977pqqgztp58v8xbcfcsa921584nv3s
92 downloads · 0 stars · 1 version
Updated 2w ago
v1.0.0 · MIT-0

Getting Started

Share your long-form videos and I'll get started on AI shorts conversion. Or just tell me what you're thinking.

Try saying:

  • "convert my long-form videos"
  • "export 1080p MP4"
  • "cut this into 3 vertical shorts"

Quick Start Setup

This skill connects to a cloud processing backend. On first use, set up the connection automatically and let the user know ("Connecting...").

Token check: Look for NEMO_TOKEN in the environment. If found, skip to session creation. Otherwise:

  • Generate a UUID as client identifier
  • POST https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with X-Client-Id header
  • Extract data.token from the response — this is your NEMO_TOKEN (100 free credits, 7-day expiry)
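The token bootstrap above can be sketched as follows. This is a minimal sketch using Python's standard library; the helper names are mine, and only the endpoint, the X-Client-Id header, and the data.token response field come from the steps above.

```python
import json
import uuid
import urllib.request

API_BASE = "https://mega-api-prod.nemovideo.ai"

def build_anonymous_token_request(client_id: str) -> urllib.request.Request:
    """Build the POST that trades a client UUID for a short-lived token."""
    return urllib.request.Request(
        f"{API_BASE}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": client_id},
    )

def extract_token(response_body: str) -> str:
    """Pull data.token out of the JSON response body."""
    return json.loads(response_body)["data"]["token"]

# Usage (the network call itself is left commented out):
client_id = str(uuid.uuid4())
req = build_anonymous_token_request(client_id)
# with urllib.request.urlopen(req) as resp:
#     token = extract_token(resp.read().decode())
```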

Session: POST https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Keep the returned session_id for all operations.

Let the user know with a brief "Ready!" when setup is complete. Don't expose tokens or raw API output.
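Under the same assumptions, the session step might look like this. The skill only specifies the endpoint, Bearer auth, and the task_name body; the helper names and the assumption that session_id is a top-level response field are mine.

```python
import json

API_BASE = "https://mega-api-prod.nemovideo.ai"

def session_request(token: str, task_name: str = "project"):
    """Return (url, headers, body) for the session-creation POST."""
    url = f"{API_BASE}/api/tasks/me/with-session/nemo_agent"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"task_name": task_name})
    return url, headers, body

def parse_session(response_body: str) -> str:
    """Keep the returned session_id for all later operations."""
    return json.loads(response_body)["session_id"]
```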

Shorts Repurposer — Convert Long Videos Into Shorts

Drop your long-form videos in the chat and tell me what you need. I'll handle the AI shorts conversion on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send a 10-minute YouTube tutorial video, ask to "cut this into 3 vertical shorts under 60 seconds each with captions", and about 1-2 minutes later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — videos with clear scene changes produce better auto-cut results.

Matching Input to Actions

User prompts referencing shorts repurposer, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | §3.4 State | Yes |
| "upload" / "上传" / user sends file | §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | No |
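The routing table above amounts to keyword matching with an SSE fallback. A minimal sketch, where the ROUTES structure, function name, and section labels are illustrative rather than part of the skill:

```python
# Keyword sets per action; a file attachment always routes to Upload.
ROUTES = [
    ({"export", "导出", "download", "send me the video"}, "3.5 Export"),
    ({"credits", "积分", "balance", "余额"}, "3.3 Credits"),
    ({"status", "状态", "show tracks"}, "3.4 State"),
    ({"upload", "上传"}, "3.2 Upload"),
]

def route(prompt: str, has_file: bool = False) -> str:
    """Map a user prompt to an action section; default to the SSE path."""
    if has_file:
        return "3.2 Upload"
    lowered = prompt.lower()
    for keywords, action in ROUTES:
        if any(k in lowered for k in keywords):
            return action
    return "3.1 SSE"
```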

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: shorts-repurposer
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/clawhub, ~/.cursor/skills/cursor, else unknown)

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
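A sketch of the attribution-header requirement. The directory probes mirror the install paths listed above; detect_platform and attribution_headers are hypothetical names, not part of the skill.

```python
import os

def detect_platform() -> str:
    """Guess the install platform from well-known skill directories."""
    home = os.path.expanduser("~")
    if os.path.isdir(os.path.join(home, ".clawhub")):
        return "clawhub"
    if os.path.isdir(os.path.join(home, ".cursor", "skills")):
        return "cursor"
    return "unknown"

def attribution_headers(token: str, version: str) -> dict:
    """Build the four headers every request needs (export fails without them)."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "shorts-repurposer",
        "X-Skill-Version": version,
        "X-Skill-Platform": detect_platform(),
    }
```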

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
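The export-and-poll loop described above could be structured like this. It is a sketch with an injectable fetch callable so the loop can be exercised without a network; the status and output.url fields follow the description above, and the function name is mine.

```python
import time

def poll_export(render_id: str, fetch, interval: float = 30.0,
                max_polls: int = 20) -> str:
    """Poll GET /api/render/proxy/lambda/<id> until status == completed.

    `fetch` takes a path and returns the parsed JSON status object,
    so the loop is testable offline.
    """
    for _ in range(max_polls):
        status = fetch(f"/api/render/proxy/lambda/{render_id}")
        if status.get("status") == "completed":
            return status["output"]["url"]
        time.sleep(interval)
    raise TimeoutError(f"render {render_id} did not complete")
```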

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
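One way to sketch this stream-handling policy. The event dict shape here is an assumption for illustration (real SSE parsing would live in the fetch layer); the fallback mirrors the state-poll rule above.

```python
def consume_sse(events, emit, check_state):
    """Walk SSE events: forward text to the user, keep tool calls and
    heartbeats internal, and fall back to a state check if the stream
    closes without ever producing text."""
    saw_text = False
    for ev in events:
        if ev.get("type") == "text" and ev.get("data"):
            emit(ev["data"])        # text goes straight to the user
            saw_text = True
        # tool calls and empty heartbeat lines stay internal
    if not saw_text:
        check_state()               # ~30% of edits end silently
```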

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.
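The compact-key mapping above can be applied mechanically. A sketch that expands a draft into readable keys; expand_draft and the long key names are my own choices.

```python
# Compact draft keys -> readable names, per the mapping above.
KEY_MAP = {"t": "tracks", "tt": "track_type", "sg": "segments",
           "d": "duration_ms", "m": "metadata"}
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}

def expand_draft(node):
    """Recursively rename compact draft keys and decode track types."""
    if isinstance(node, dict):
        out = {}
        for k, v in node.items():
            name = KEY_MAP.get(k, k)
            if name == "track_type":
                v = TRACK_TYPES.get(v, v)
            else:
                v = expand_draft(v)
            out[name] = v
        return out
    if isinstance(node, list):
        return [expand_draft(x) for x in node]
    return node
```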

Timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; a subscription-tier restriction, not a credit issue
  • 429 — rate limited; wait 30s and retry once
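The recovery steps above can be collapsed into a single lookup. This is a sketch; handle_error and the action strings are illustrative, not part of the skill.

```python
def handle_error(code: int) -> str:
    """Map backend error codes to the recovery step the skill should take."""
    actions = {
        0: "continue",
        1001: "re-acquire token via /api/auth/anonymous-token",
        1002: "create a new session",
        2001: "out of credits: offer registration link or top-up",
        4001: "show the list of accepted formats",
        4002: "suggest compressing or trimming the file",
        400: "generate an X-Client-Id and retry",
        402: "export blocked on free plan (subscription tier, not credits)",
        429: "wait 30s and retry once",
    }
    return actions.get(code, f"unexpected code {code}: surface to user")
```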

Common Workflows

Quick edit: Upload → "cut this into 3 vertical shorts under 60 seconds each with captions" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "cut this into 3 vertical shorts under 60 seconds each with captions" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility across TikTok, Reels, and YouTube Shorts.
