Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Photo Video Ai

v1.0.0

Convert photos and videos into animated photo videos with this skill. Works with JPG, PNG, MP4, MOV files up to 500MB. Social media creators use it for turni...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mory128/photo-video-ai.

Prompt Preview: Install & Setup
Install the skill "Photo Video Ai" (mory128/photo-video-ai) from ClawHub.
Skill page: https://clawhub.ai/mory128/photo-video-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install photo-video-ai

ClawHub CLI

Package manager switcher

npx clawhub@latest install photo-video-ai
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's name and description (convert photos/videos to videos) line up with requiring a NEMO_TOKEN and calling a remote rendering API. However, the SKILL.md frontmatter declares a required config path (~/.config/nemovideo/) and the running instructions expect to detect install paths (~/.clawhub/, ~/.cursor/skills/) and to read the file's YAML frontmatter at runtime. The registry metadata provided to the evaluator did not list any required config paths — that inconsistency suggests the skill's declared capabilities/requirements in the catalog are out-of-sync with the runtime instructions.
Instruction Scope
Instructions explicitly tell the agent to: read NEMO_TOKEN from the environment (expected); generate an anonymous token from the vendor endpoint if no env token exists (expected); create sessions, upload files (multipart uploads or URLs), poll render status, and include custom attribution headers. They also instruct the agent to detect its install path (~/.clawhub/ or ~/.cursor/skills/) and read the SKILL.md YAML frontmatter at runtime to populate X-Skill-Version. Detecting install paths and reading local files is filesystem access beyond simple API integration, and it is not declared in the registry metadata. If an agent has access to arbitrary local paths, these instructions could cause it to probe the filesystem for those locations.
Install Mechanism
No install spec and no code files — instruction-only skill. This minimizes write-to-disk and supply-chain risk; nothing is being downloaded or installed by the skill itself.
Credentials
The skill only requires one credential (NEMO_TOKEN) as primaryEnv, which is proportionate to calling a third-party render API. The instructions also create an anonymous token if NEMO_TOKEN is not present (POST to vendor endpoint). There are no additional unrelated secrets requested. Users should verify what privileges the NEMO_TOKEN grants on the vendor side (account access, billing, long-lived operations).
Persistence & Privilege
always:false and normal autonomous invocation settings. The skill does not request to be forced-always-enabled and does not provide install scripts or attempt to modify other skills or system-wide settings in the provided instructions.
What to consider before installing
This skill appears to do what it says: call a remote service to render videos and it needs a single API token (NEMO_TOKEN). Before installing, consider: 1) The SKILL.md instructs the agent to detect install paths (~/.clawhub/, ~/.cursor/skills/) and read YAML frontmatter — that means the agent may probe some local paths; make sure you are comfortable with the agent accessing those locations. 2) Only provide a NEMO_TOKEN scoped to the minimum necessary privileges (or let the skill use its anonymous token flow if you prefer ephemeral access). 3) Confirm the vendor domain (mega-api-prod.nemovideo.ai) and privacy/billing implications for uploading media. 4) The registry metadata and the SKILL.md disagree about required config paths — this could be an out-of-date metadata issue; ask the publisher or inspect the skill instructions in full if you need higher assurance. If you require stricter limits, avoid setting a persistent NEMO_TOKEN and restrict the agent's filesystem/network permissions when running the skill.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🖼️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97dk4gf97mm8f55ef2ev7gmk1855ja6
100 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0
MIT-0

Getting Started

Ready when you are. Drop your photos and videos here or describe what you want to make.

Try saying:

  • "convert five product photos and a background music file into a 1080p MP4"
  • "turn my photos into a smooth video slideshow with transitions and music"
  • "turn my photo collections into shareable videos for social media"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
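The connection steps above can be sketched as request builders. This is a minimal, illustrative sketch: it only assembles the requests described here (UUID client ID, X-Client-Id header, Bearer session call) and does not send anything; the helper names and returned dict shape are my own, not part of the skill.

```python
import json
import uuid

API_BASE = "https://mega-api-prod.nemovideo.ai"

def build_anonymous_token_request():
    """Assemble the starter-token request (hypothetical helper; nothing is sent)."""
    client_id = str(uuid.uuid4())  # fresh UUID as the client identifier
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": client_id},
    }

def build_session_request(nemo_token, language="en"):
    """Assemble the session-creation request with Bearer authorization."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        "headers": {
            "Authorization": f"Bearer {nemo_token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"task_name": "project", "language": language}),
    }
```

A real client would pass these dicts to its HTTP layer, then keep the session_id from the response for every subsequent request.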

Tell the user you're ready. Keep the technical details out of the chat.

Photo Video AI — Turn Photos Into Shareable Videos

Send me your photos and videos and describe the result you want. The AI video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload five product photos and a background music file, type "turn my photos into a smooth video slideshow with transitions and music", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: using 10 or fewer photos keeps processing fast and the video concise.

Matching Input to Actions

User prompts referencing photo video ai, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says → Action:

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
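The routing table above can be mirrored by a small keyword router. A sketch only: the function name, the string return values, and the first-match ordering are assumptions, and real intent classification would be fuzzier than substring matching.

```python
def route_message(text, has_file=False):
    """Map a user message to one of the skill's actions (illustrative keyword router)."""
    lowered = text.lower()
    if has_file or any(k in lowered for k in ("upload", "上传")):
        return "upload"    # §3.2 — user sent a file or asked to upload
    if any(k in lowered for k in ("export", "导出", "download", "send me the video")):
        return "export"    # §3.5
    if any(k in lowered for k in ("credits", "积分", "balance", "余额")):
        return "credits"   # §3.3
    if any(k in lowered for k in ("status", "状态", "show tracks")):
        return "state"     # §3.4
    return "sse"           # §3.1 — everything else goes through the SSE flow
```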

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: photo-video-ai
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, else unknown)
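The platform detection and header assembly might look like the following sketch. The function names are hypothetical, and matching on path components is one plausible reading of the detection rule; the version value would come from the SKILL.md frontmatter.

```python
from pathlib import Path

def detect_platform(install_path):
    """Map the skill's install path to an X-Skill-Platform value."""
    parts = Path(install_path).parts
    if ".clawhub" in parts:
        return "clawhub"
    if ".cursor" in parts and "skills" in parts:
        return "cursor"
    return "unknown"

def attribution_headers(version, install_path):
    """Assemble the three attribution headers required by the render API."""
    return {
        "X-Skill-Source": "photo-video-ai",
        "X-Skill-Version": version,           # from SKILL.md frontmatter
        "X-Skill-Platform": detect_platform(install_path),
    }
```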

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
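The export polling loop described above can be sketched with the fetcher injected, so no network call is made here. The `fetch_status` callable stands in for the authenticated GET to /api/render/proxy/lambda/<id>; the function name, the `max_attempts` cap, and the payload shape beyond `status` and `output.url` are assumptions.

```python
import time

def poll_export(render_id, fetch_status, interval=30, max_attempts=20, sleep=time.sleep):
    """Poll render status every `interval` seconds until completed.

    `fetch_status(render_id)` must return the decoded JSON status payload;
    a real client would issue the GET request described above.
    """
    for _ in range(max_attempts):
        payload = fetch_status(render_id)
        if payload.get("status") == "completed":
            return payload["output"]["url"]   # download URL
        sleep(interval)
    raise TimeoutError(f"render {render_id} did not complete")
```

Injecting `sleep` as well makes the loop trivially testable without waiting 30 seconds per attempt.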

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Error Handling

Code | Meaning | Action
0 | Success | Continue
1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days)
1002 | Session not found | New session (§3.0)
2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account"
4001 | Unsupported file | Show supported formats
4002 | File too large | Suggest compress/trim
400 | Missing X-Client-Id | Generate a Client-Id and retry (see §1)
402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
429 | Rate limit (1 token/client/7 days) | Retry once after 30s
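An agent implementation might encode the error table as a lookup. This is only a sketch: the action identifiers and the fallback for unknown codes are my own naming, not part of the skill.

```python
# Hypothetical mapping of backend error codes to agent actions,
# mirroring the error-handling table above.
ERROR_ACTIONS = {
    0: "continue",
    1001: "reauth_anonymous_token",      # tokens expire after 7 days
    1002: "create_new_session",
    2001: "show_credits_guidance",
    4001: "show_supported_formats",
    4002: "suggest_compress_or_trim",
    400: "generate_client_id_and_retry",
    402: "prompt_plan_upgrade",          # subscription tier issue, NOT credits
    429: "retry_once_after_30s",
}

def action_for(code):
    """Return the recommended action for a backend error code."""
    return ERROR_ACTIONS.get(code, "surface_error_to_user")
```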

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

Backend says | You do
"click [button]" / "点击" ("click") | Execute via API
"open [panel]" / "打开" ("open") | Query session state
"drag/drop" / "拖拽" ("drag") | Send edit via SSE
"preview in timeline" | Show track summary
"Export button" / "导出" ("export") | Execute export workflow

SSE Event Handling

Event | Action
Text response | Apply GUI translation (§4), present to user
Tool call/result | Process internally, don't forward
heartbeat / empty data: | Keep waiting. Every 2 min: "⏳ Still working..."
Stream closes | Process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
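The event-handling table above can be sketched as a classifier. Heavily hedged: the decoded event shape (a dict with a `data` field containing `text` or tool keys) is an assumption I am making for illustration; the actual SSE payload format is not specified here.

```python
def classify_sse_event(event):
    """Classify a decoded SSE event per the handling table (assumed event shape)."""
    if not event or not event.get("data"):
        return "heartbeat"   # keep waiting; emit "⏳ Still working..." every 2 min
    data = event["data"]
    if "tool_call" in data or "tool_result" in data:
        return "internal"    # process internally, don't forward to the user
    if "text" in data:
        return "present"     # apply GUI translation (§4), then show to user
    return "internal"
```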

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks): 1. Video: city timelapse (0-10s) 2. BGM: Lo-fi (0-10s, 35%) 3. Title: "Urban Dreams" (0-3s)
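A summary like the one above can be produced by expanding the draft's short keys. A sketch under stated assumptions: the short keys (t/tt/sg/d/m) are from the skill text, but reading a track name from `m["name"]` and the exact output wording are my own guesses.

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft):
    """Expand draft short keys into a readable timeline summary."""
    tracks = draft.get("t", [])
    lines = [f"Timeline ({len(tracks)} tracks):"]
    for i, track in enumerate(tracks, start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        # d is segment duration in milliseconds; sum across segments
        total_ms = sum(seg.get("d", 0) for seg in track.get("sg", []))
        name = track.get("m", {}).get("name", "untitled")  # assumed metadata field
        lines.append(f"{i}. {kind}: {name} ({total_ms / 1000:.0f}s)")
    return "\n".join(lines)
```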

Common Workflows

Quick edit: Upload → "turn my photos into a smooth video slideshow with transitions and music" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn my photos into a smooth video slideshow with transitions and music" — concrete instructions get better results.

Max file size is 500MB. Stick to JPG, PNG, MP4, MOV for the smoothest experience.

Export as MP4 for widest compatibility across all social platforms.
