Photo Video

v1.0.0

Skip the learning curve of professional editing software. Describe what you want — turn these photos into a slideshow video with music and transitions — and...

by peand-rover (adam@peand-rover)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for peand-rover/photo-video.

Prompt preview (Install & Setup):
Install the skill "Photo Video" (peand-rover/photo-video) from ClawHub.
Skill page: https://clawhub.ai/peand-rover/photo-video
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install photo-video

ClawHub CLI


npx clawhub@latest install photo-video
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (turn photos into slideshow videos) align with the declared requirement (NEMO_TOKEN) and the SKILL.md which documents photo uploads, session creation, render/export endpoints, and download URLs. No unrelated credentials or binaries are requested.
Instruction Scope
Instructions are focused on connecting to the nemovideo cloud API, creating an anonymous token if needed, uploading media, and controlling render jobs. They require reading this skill's frontmatter for attribution headers and detecting an install path to set X-Skill-Platform, which entails reading the skill's install location (reasonable for attribution but notable). The skill will upload user images to an external service (mega-api-prod.nemovideo.ai) — expected for this purpose but a privacy consideration.
Install Mechanism
Instruction-only skill with no install spec and no code files. This is the lowest-risk install model; nothing is downloaded or written by an installer.
Credentials
Only a single credential (NEMO_TOKEN) is required, which is appropriate for a cloud rendering API. The skill also documents how to obtain a short-lived anonymous token if NEMO_TOKEN is absent. It does not request unrelated secrets or config paths beyond an optional service config directory for attribution metadata.
Persistence & Privilege
The skill is not always-enabled and does not request elevated platform privileges. It instructs the agent to persist a session_id for the render session (expected) and does not modify other skills or system-wide settings.
Scan Findings in Context
[no-code-files] expected: The regex-based scanner had no files to analyze because this is an instruction-only skill (SKILL.md). No additional findings were produced.
Assessment
This skill appears to do what it says: it uploads your photos to a third‑party rendering service (mega-api-prod.nemovideo.ai) and returns a video. Before installing, consider privacy and credential hygiene: use an anonymous token (the skill documents how to obtain one) rather than reusing any long‑lived or sensitive API keys; avoid uploading sensitive images if you don't want them sent offsite; confirm the service's retention and privacy policies if that matters; and be aware the skill will store short‑lived session IDs. The attribution header behavior requires reading the skill's install location — not malicious but worth noting. If you need stricter data control, don't provide a persistent NEMO_TOKEN and verify the service domain independently.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🖼️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk974n5qnqz7whef43vxjf9frnn84nn6g
95 downloads · 0 stars · 1 version
Updated 2w ago
v1.0.0 · MIT-0

Getting Started

Share your photos or images and I'll get started on AI video creation. Or just tell me what you're thinking.

Try saying:

  • "turn my photos or images"
  • "export 1080p MP4"
  • "turn these photos into a slideshow"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.
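A minimal shell sketch of this flow, assuming uuidgen and jq are available; the .session_id path is an assumption, since the text above only says to save session_id from the response:

# Sketch only: anonymous-token plus session bootstrap as described above.
CLIENT_ID=$(uuidgen)

# Free token (100 credits, 7-day expiry); the token is at data.token.
NEMO_TOKEN=$(curl -s -X POST \
  -H "X-Client-Id: $CLIENT_ID" \
  https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token | jq -r '.data.token')

# Create a session; .session_id is an assumed JSON path.
SESSION_ID=$(curl -s -X POST \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"task_name":"project"}' \
  https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent | jq -r '.session_id')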

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.

Photo Video — Turn Photos Into Shareable Videos

This tool takes your photos or images and runs AI video creation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have five vacation photos in JPG format and want to turn these photos into a slideshow video with music and transitions — the backend processes it in about 30-60 seconds and hands you a 1080p MP4.

Tip: fewer photos with longer durations per image tend to look more polished.

Matching Input to Actions

User prompts referencing photo video, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export (skips SSE)
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits (skips SSE)
  • "status" / "状态" / "show tracks" → §3.4 State (skips SSE)
  • "upload" / "上传" / user sends a file → §3.2 Upload (skips SSE)
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
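A crude shell sketch of this routing, illustrative only: the real matching is keyword-plus-intent classification by the agent, and the patterns here are just the rows above.

# Illustrative keyword router; falls through to SSE for everything else.
route_message() {
  case "$1" in
    *export*|*导出*|*download*) echo "export"  ;;  # §3.5
    *credits*|*积分*|*余额*)    echo "credits" ;;  # §3.3
    *status*|*状态*)            echo "state"   ;;  # §3.4
    *upload*|*上传*)            echo "upload"  ;;  # §3.2
    *)                          echo "sse"     ;;  # §3.1
  esac
}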

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> (multipart file or JSON with URLs).
  4. Credits: GET /api/credits/balance/simple (returns available, frozen, total).
  5. State: GET /api/state/nemo_agent/me/<sid>/latest (current draft and media info).
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.
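A curl sketch of steps 3 and 6, assuming $SID, $NEMO_TOKEN, and $RENDER_ID are already set; the .status field name in the poll response is an assumption, and the attribution headers described below are omitted for brevity:

# Step 3: upload a local photo as multipart form data.
curl -s -X POST \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -F "file=@vacation-01.jpg" \
  "https://mega-api-prod.nemovideo.ai/api/upload-video/nemo_agent/me/$SID"

# Step 6: poll the render job every 30s until it reports completed.
while true; do
  STATUS=$(curl -s -H "Authorization: Bearer $NEMO_TOKEN" \
    "https://mega-api-prod.nemovideo.ai/api/render/proxy/lambda/$RENDER_ID" | jq -r '.status')
  [ "$STATUS" = "completed" ] && break
  sleep 30
done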

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: photo-video
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/clawhub, ~/.cursor/skills/cursor, else unknown)

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
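A shell sketch of the platform detection and full header set, using the install paths listed above; the credits endpoint is just a convenient call to demonstrate the headers:

# Map the install location to X-Skill-Platform as specified above.
if   [ -d "$HOME/.clawhub" ];       then PLATFORM=clawhub
elif [ -d "$HOME/.cursor/skills" ]; then PLATFORM=cursor
else                                     PLATFORM=unknown
fi

# Bearer token plus all three attribution headers on a sample call.
curl -s \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "X-Skill-Source: photo-video" \
  -H "X-Skill-Version: 1.0.0" \
  -H "X-Skill-Platform: $PLATFORM" \
  https://mega-api-prod.nemovideo.ai/api/credits/balance/simple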

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
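The same timeline as a hypothetical draft fragment using the short keys above; only t, tt, sg, d, and m are documented, so the field names inside m are invented for illustration:

# Hypothetical draft fragment, written to a file for use as a request body.
cat > draft.json <<'EOF'
{
  "t": [
    { "tt": 0, "sg": [ { "d": 10000, "m": { "name": "city timelapse" } } ] },
    { "tt": 1, "sg": [ { "d": 10000, "m": { "name": "Lo-fi", "volume": 0.35 } } ] },
    { "tt": 7, "sg": [ { "d": 3000,  "m": { "text": "Urban Dreams" } } ] }
  ]
}
EOF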

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty "data:" lines mean the backend is still working; show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
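A curl sketch of the SSE call and the state fallback; the JSON body shape is reconstructed from the §3.1 description (session_id plus new_message.parts[0].text) and may differ from the real schema:

# Stream an edit command; -N disables buffering so events arrive live.
curl -sN -X POST \
  -H "Authorization: Bearer $NEMO_TOKEN" \
  -H "Accept: text/event-stream" \
  -H "Content-Type: application/json" \
  -d "{\"session_id\":\"$SID\",\"new_message\":{\"parts\":[{\"text\":\"add upbeat BGM\"}]}}" \
  https://mega-api-prod.nemovideo.ai/run_sse

# If the stream closes silently, confirm the edit landed via session state.
curl -s -H "Authorization: Bearer $NEMO_TOKEN" \
  "https://mega-api-prod.nemovideo.ai/api/state/nemo_agent/me/$SID/latest"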

Error Handling

  • 0 (Success): Continue.
  • 1001 (Bad/expired token): Re-auth via anonymous-token; tokens expire after 7 days.
  • 1002 (Session not found): Start a new session (§3.0).
  • 2001 (No credits): Anonymous users: show the registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered users: "Top up credits in your account."
  • 4001 (Unsupported file): Show supported formats.
  • 4002 (File too large): Suggest compressing or trimming.
  • 400 (Missing X-Client-Id): Generate a Client-Id and retry (see §1).
  • 402 (Free plan export blocked): Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
  • 429 (Rate limit, 1 token/client/7 days): Retry once after 30s.
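A sketch of the single 429 retry, using the anonymous-token endpoint as the example; any rate-limited call would follow the same pattern:

# Capture only the HTTP status; retry once after 30s on 429.
HTTP_CODE=$(curl -s -o /dev/null -w '%{http_code}' -X POST \
  -H "X-Client-Id: $CLIENT_ID" \
  https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token)
if [ "$HTTP_CODE" = "429" ]; then
  sleep 30
  curl -s -X POST -H "X-Client-Id: $CLIENT_ID" \
    https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token
fi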

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn these photos into a slideshow video with music and transitions" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, HEIC, WEBP for the smoothest experience.

Export as MP4 for widest compatibility.

Common Workflows

Quick edit: Upload → "turn these photos into a slideshow video with music and transitions" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
