Highlight Editor Photo

v1.0.0

Skip the learning curve of professional editing software. Describe what you want — turn these photos into a highlight reel with music and transitions — and g...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for tk8544-b/highlight-editor-photo.

Prompt Preview: Install & Setup
Install the skill "Highlight Editor Photo" (tk8544-b/highlight-editor-photo) from ClawHub.
Skill page: https://clawhub.ai/tk8544-b/highlight-editor-photo
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install highlight-editor-photo

ClawHub CLI

npx clawhub@latest install highlight-editor-photo
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill name/description (photo → highlight video) aligns with requiring a NEMO_TOKEN to call the nemovideo.ai APIs. Minor inconsistency: the registry metadata shown to you listed no required config paths, but the SKILL.md frontmatter (metadata) declares a configPaths entry (~/.config/nemovideo/). This is a small mismatch in metadata but does not change the core purpose.
Instruction Scope
SKILL.md instructs the agent to: use NEMO_TOKEN or obtain an anonymous token, create sessions, upload files, use SSE for streaming responses, poll render status, and return download URLs. Those actions are appropriate for a remote render service. The skill also instructs reading its own YAML frontmatter and checking the agent's install path to set X-Skill-Platform attribution headers — this requires the agent to inspect local filesystem paths (e.g., ~/.clawhub/, ~/.cursor/skills/) which is not strictly necessary for video rendering but is intended for attribution. Expect the skill to transmit your uploaded images/videos and metadata to https://mega-api-prod.nemovideo.ai.
Install Mechanism
There is no install spec and no code files — instruction-only. No files are downloaded or extracted by the skill itself, so installation risk is minimal.
Credentials
The only required credential is NEMO_TOKEN (declared as primary), which matches the described API usage. The skill will also optionally mint an anonymous token if no token is present. It does not ask for unrelated secrets or other environment variables.
Persistence & Privilege
always:false is set and there is no install-time persistence. The skill does not request to modify other skills or system-wide config; it only instructs creating remote sessions with the service.
Assessment
This skill appears to do what it claims: it will upload the photos you provide to nemovideo.ai and return rendered video URLs. Before installing or using it, consider:

  1. Privacy: your images are sent to https://mega-api-prod.nemovideo.ai — check that service's privacy and retention policy if you care about sensitive content.
  2. Credential choice: only provide a NEMO_TOKEN if you trust the service; otherwise the skill will obtain an anonymous token for you (which still uploads your files).
  3. Local inspection: the skill may read its own frontmatter and probe typical install paths to set attribution headers; this can reveal limited environment/install information. If you prefer not to disclose that, avoid installing.
  4. Metadata mismatch: the SKILL.md includes a configPaths entry while the registry metadata did not; this is likely a minor packaging inconsistency but worth noting.

If you want stronger assurance, ask the skill author for a privacy/data-retention statement and for clarification about the configPaths declaration.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🖼️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk971z9jtb5yczrzcz22nha45vs84sxjh
81 downloads · 0 stars · 1 version
Updated 2w ago
v1.0.0
MIT-0

Getting Started

Share your photos or images and I'll get started on AI highlight video creation. Or just tell me what you're thinking.

Try saying:

  • "turn my photos or images"
  • "export 1080p MP4"
  • "turn these photos into a highlight"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
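
As a rough illustration, the connection flow above could look like this in Python. This is a sketch, not the skill's actual code: it assumes the requests library, and that the token and session_id fields sit at the top level of each JSON response.

import os
import uuid
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def get_token() -> str:
    """Use NEMO_TOKEN if set; otherwise mint an anonymous starter token."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    client_id = str(uuid.uuid4())  # UUID client identifier
    resp = requests.post(
        f"{BASE}/api/auth/anonymous-token",
        headers={"X-Client-Id": client_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["token"]  # 100 free credits, valid 7 days (field name assumed)

def create_session(token: str) -> str:
    """Create a session; its session_id is needed for all following requests."""
    resp = requests.post(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        json={"task_name": "project", "language": "en"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["session_id"]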

Tell the user you're ready. Keep the technical details out of the chat.

Highlight Editor Photo — Turn Photos Into Highlight Videos

Send me your photos or images and describe the result you want. The AI highlight video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload ten vacation photos from a phone gallery, type "turn these photos into a highlight reel with music and transitions", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: fewer than 20 photos process fastest and keep the highlight reel tight.

Matching Input to Actions

User prompts referencing highlight editor photo, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification (a routing sketch follows the list):

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export (skips SSE)
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits (skips SSE)
  • "status" / "状态" / "show tracks" → §3.4 State (skips SSE)
  • "upload" / "上传" / user sends file → §3.2 Upload (skips SSE)
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
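
A toy version of this router in Python might look like the following. The action names are placeholders for the section handlers, and real intent classification would be fuzzier than substring matching; the precedence (specific keywords before the SSE catch-all) is the point.

ROUTES = [
    (("export", "导出", "download", "send me the video"), "export"),  # §3.5
    (("credits", "积分", "balance", "余额"), "credits"),              # §3.3
    (("status", "状态", "show tracks"), "state"),                     # §3.4
    (("upload", "上传"), "upload"),                                   # §3.2
]

def route(message: str, has_attachment: bool = False) -> str:
    if has_attachment:
        return "upload"  # user sends a file → §3.2 Upload
    text = message.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"  # everything else → §3.1 SSE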

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> — multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple — returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest — current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
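
For concreteness, uploading a file (endpoint 3) and running an export with polling (endpoint 6) might look roughly like the sketch below. The export body shape ({"render_id": ..., "draft": ...}) and the poll-response fields ("status", "download_url") are guesses from the description above, and headers is assumed to carry the Bearer token plus the attribution headers described below.

import time
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def upload_file(headers: dict, sid: str, path: str) -> dict:
    """Endpoint 3: multipart upload into the session."""
    with open(path, "rb") as f:
        resp = requests.post(
            f"{BASE}/api/upload-video/nemo_agent/me/{sid}",
            headers=headers,
            files={"file": f},  # multipart field name assumed
            timeout=120,
        )
    resp.raise_for_status()
    return resp.json()

def export_and_wait(headers: dict, render_id: str, draft: dict) -> str:
    """Endpoint 6: trigger a render, then poll every 30s until completed."""
    requests.post(
        f"{BASE}/api/render/proxy/lambda",
        headers=headers,
        json={"render_id": render_id, "draft": draft},  # body shape assumed
        timeout=30,
    ).raise_for_status()
    while True:
        time.sleep(30)  # poll every 30s, per the spec above
        job = requests.get(
            f"{BASE}/api/render/proxy/lambda/{render_id}",
            headers=headers,
            timeout=30,
        ).json()
        if job.get("status") == "completed":  # field names assumed
            return job["download_url"]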

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: highlight-editor-photo
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, else unknown)

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
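
A sketch of how that detection could be implemented. The naive frontmatter scan and path probing here are illustrative, not the skill's actual code.

from pathlib import Path

def build_attribution_headers(skill_file: str) -> dict:
    """Read the version from SKILL.md frontmatter and guess the platform."""
    version = "unknown"
    for line in Path(skill_file).read_text().splitlines():
        if line.startswith("version:"):  # naive YAML frontmatter scan
            version = line.split(":", 1)[1].strip()
            break
    path = str(Path(skill_file).resolve())
    if "/.clawhub/" in path:
        platform = "clawhub"
    elif "/.cursor/skills/" in path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "X-Skill-Source": "highlight-editor-photo",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }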

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
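
Using those abbreviations, the timeline above might serialize to something like the draft below. The nesting and segment keys are speculative; only the t/tt/sg/d/m abbreviations come from the documented mapping.

draft = {
    "t": [  # t = tracks
        {"tt": 0,  # tt 0 = video track
         "sg": [{"d": 10_000, "m": {"clip": "city timelapse"}}]},
        {"tt": 1,  # tt 1 = audio track
         "sg": [{"d": 10_000, "m": {"bgm": "Lo-fi", "volume": 0.35}}]},  # 35% volume, representation assumed
        {"tt": 7,  # tt 7 = text track
         "sg": [{"d": 3_000, "m": {"text": "Urban Dreams"}}]},
    ]
}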

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls (a toy mapping sketch follows this list):

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

SSE Event Handling

Event → Action:

  • Text response → apply the GUI translation (§4), present to user
  • Tool call/result → process internally, don't forward
  • Heartbeat / empty data: → keep waiting; every 2 min show "⏳ Still working..."
  • Stream closes → process the final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
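
A rough SSE consumer for /run_sse, assuming the requests streaming API; a real implementation should use a proper SSE client and the backend's exact event format.

import time
import requests

def run_sse(headers: dict, session_id: str, message: str):
    """Stream the agent's reply; yields each data payload as a string."""
    resp = requests.post(
        "https://mega-api-prod.nemovideo.ai/run_sse",
        headers={**headers, "Accept": "text/event-stream"},
        json={"session_id": session_id,
              "new_message": {"parts": [{"text": message}]}},
        stream=True,
        timeout=15 * 60,  # streams can run up to 15 min
    )
    last_nudge = time.monotonic()
    for raw in resp.iter_lines():
        if not raw:  # heartbeat / empty data: keep waiting
            if time.monotonic() - last_nudge > 120:
                print("⏳ Still working...")  # 2-minute status nudge
                last_nudge = time.monotonic()
            continue
        line = raw.decode("utf-8")
        if line.startswith("data:"):
            yield line[len("data:"):].strip()
    # stream closed: the caller processes the final response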

Error Handling

Code → Meaning → Action:

  • 0 (Success): continue.
  • 1001 (Bad/expired token): re-auth via anonymous-token (tokens expire after 7 days).
  • 1002 (Session not found): create a new session (§3.0).
  • 2001 (No credits): anonymous users: show the registration URL with ?bind=<id> (get <id> from the create-session or state response when needed); registered users: "Top up credits in your account."
  • 4001 (Unsupported file): show supported formats.
  • 4002 (File too large): suggest compress/trim.
  • 400 (Missing X-Client-Id): generate a Client-Id and retry (see §1).
  • 402 (Free plan export blocked): subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
  • 429 (Rate limit, 1 token/client/7 days): retry once in 30s.
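
An illustrative dispatcher over those codes; the action names are placeholders for the recovery steps above.

def handle_error(code: int) -> str:
    """Map an API error code to a recovery action (names are placeholders)."""
    if code == 0:
        return "continue"
    if code == 1001:
        return "reauth"                # mint a fresh anonymous token
    if code == 1002:
        return "new_session"           # recreate the session (§3.0)
    if code == 2001:
        return "show_topup"            # registration URL or top-up prompt
    if code in (4001, 4002):
        return "file_advice"           # formats list / compress-and-trim tip
    if code == 400:
        return "retry_with_client_id"  # generate X-Client-Id, retry (§1)
    if code == 402:
        return "upgrade_plan"          # tier issue, not credits
    if code == 429:
        return "retry_once_in_30s"
    return "surface_error"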

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn these photos into a highlight reel with music and transitions" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, HEIC, WEBP for the smoothest experience.

Export as MP4 for widest compatibility.

Common Workflows

Quick edit: Upload → "turn these photos into a highlight reel with music and transitions" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
