AI Video Maker for Marketing

v1.0.0

Turn three product photos and a logo file into polished 1080p marketing videos just by typing what you need. Whether it's creating short promotional videos f...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mory128/ai-video-maker-for-marketing.

Prompt Preview — Install & Setup
Install the skill "Ai Video Maker For Marketing" (mory128/ai-video-maker-for-marketing) from ClawHub.
Skill page: https://clawhub.ai/mory128/ai-video-maker-for-marketing
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ai-video-maker-for-marketing

ClawHub CLI


npx clawhub@latest install ai-video-maker-for-marketing
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description describe a cloud video-rendering workflow and the skill only requests a single service credential (NEMO_TOKEN) and uploads media — this is proportionate. Minor inconsistency: the SKILL.md frontmatter mentions a config path (~/.config/nemovideo/) while the registry metadata lists no config paths; that could indicate the skill expects local config but the registry entry didn't declare it.
Instruction Scope
Runtime instructions are explicit and limited to: using NEMO_TOKEN (or obtaining an anonymous token via a POST to the provider), creating a session, uploading media, interacting with SSE/chat endpoints, and polling export status. These actions are consistent with making a cloud render service work. Minor scope-creep items: the skill asks the agent to derive an X-Skill-Platform header by checking install paths (which implies probing known locations), and the frontmatter references a config path — both could require filesystem checks. The instructions do not request arbitrary file reads or unrelated environment variables.
Install Mechanism
Instruction-only skill with no install spec and no code files — minimal disk footprint and no downloads. This is the lowest-risk install posture.
Credentials
Only one credential is required: NEMO_TOKEN (declared as the primary credential). That is reasonable for a cloud API. Note: SKILL.md metadata references a local config path that could contain credentials or session state; confirm whether the skill will read/write that path and whether you want tokens stored there.
Persistence & Privilege
The skill is not marked always:true and it is user-invocable. It can be invoked autonomously (default), which is normal for skills; it does not request elevated system persistence or alter other skills’ configs.
Assessment
This skill appears to do what it says: it uploads your photos/clips to nemo's cloud rendering API and returns a download URL. Before installing or using it: (1) Verify you trust the domain (mega-api-prod.nemovideo.ai) and are comfortable sending your images/audio to that service; (2) Decide whether to set a persistent NEMO_TOKEN in your environment or let the skill obtain an anonymous token (anonymous tokens have limited credits/expiry); (3) Ask the author to clarify the referenced config path (~/.config/nemovideo/) — confirm whether tokens or session data will be written there; (4) If you will use sensitive imagery, avoid storing long-lived credentials in the environment and review the provider's privacy terms. If you want, I can draft specific questions to send to the skill author (about config path usage, token lifecycle, and data retention).

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

📣 Clawdis
Env (primary): NEMO_TOKEN
latest: vk978xrtwhwdm45ykvga5fp7gyx85jehh
50 downloads · 0 stars · 1 version · Updated 2d ago
v1.0.0 · MIT-0

Getting Started

Share your images or clips and I'll get started on AI marketing video creation. Or just tell me what you're thinking.

Try saying:

  • "create a video from my images or clips"
  • "export 1080p MP4"
  • "turn these product images into a 30-second promotional video"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).
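The token step can be sketched as request construction plus response parsing. This is a minimal sketch using only the Python standard library; the endpoint URL, header name, and `data.token` field come from the doc, while the function names and the choice to leave the actual HTTP send to your client of choice are illustrative:

```python
import json
import uuid

BASE_URL = "https://mega-api-prod.nemovideo.ai"

def build_anonymous_token_request():
    """Build URL and headers for the anonymous-token POST (sketch)."""
    client_id = str(uuid.uuid4())  # fresh client identifier per the doc
    url = f"{BASE_URL}/api/auth/anonymous-token"
    headers = {"X-Client-Id": client_id}
    return url, headers

def extract_token(response_body: str) -> str:
    """Pull data.token out of the JSON response body."""
    return json.loads(response_body)["data"]["token"]
```

Splitting the builder from the sender keeps the token flow easy to test without touching the network.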

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.
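The session step follows the same pattern — a sketch, with the endpoint, body, and `session_id` field taken from the doc and the function names assumed:

```python
import json

BASE_URL = "https://mega-api-prod.nemovideo.ai"

def build_session_request(token: str):
    """Build URL, headers, and body for session creation (sketch)."""
    url = f"{BASE_URL}/api/tasks/me/with-session/nemo_agent"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"task_name": "project"})
    return url, headers, body

def extract_session_id(response_body: str) -> str:
    """Read session_id from the session-creation response."""
    return json.loads(response_body)["session_id"]
```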

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.

AI Video Maker for Marketing — Create and Export Marketing Videos

This tool takes your images or clips and runs AI marketing video creation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have three product photos and a logo file and want to turn these product images into a 30-second promotional video with text overlays and background music — the backend processes it in about 1-2 minutes and hands you a 1080p MP4.

Tip: square or vertical formats work great for social ads — specify your target platform before rendering.

Matching Input to Actions

User prompts referencing ai video maker for marketing, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... → Action
"export" / "导出" / "download" / "send me the video"→ §3.5 Export
"credits" / "积分" / "balance" / "余额"→ §3.3 Credits
"status" / "状态" / "show tracks"→ §3.4 State
"upload" / "上传" / user sends file→ §3.2 Upload
Everything else (generate, edit, add BGM…)→ §3.1 SSE
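The routing table above can be approximated with a simple substring matcher — a sketch only; the real skill may use richer intent classification, and the keyword lists here are copied from the table:

```python
def route(message: str) -> str:
    """Map a user message to an action via keyword matching (sketch)."""
    text = message.lower()
    rules = [
        ("export",  ["export", "导出", "download", "send me the video"]),
        ("credits", ["credits", "积分", "balance", "余额"]),
        ("state",   ["status", "状态", "show tracks"]),
        ("upload",  ["upload", "上传"]),
    ]
    for action, keywords in rules:
        if any(k in text for k in keywords):
            return action
    return "sse"  # everything else goes through the chat stream
```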

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session — POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE) — POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload — POST /api/upload-video/nemo_agent/me/<sid> — multipart file or JSON with URLs.
  4. Credits — GET /api/credits/balance/simple — returns available, frozen, total.
  5. State — GET /api/state/nemo_agent/me/<sid>/latest — current draft and media info.
  6. Export — POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.
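The export polling in step 6 might look like this sketch. The fetcher is injected so the loop can be tested offline; the `status` and `download_url` field names are assumptions, not confirmed by the doc:

```python
import time

def poll_export(render_id: str, fetch, interval: float = 30.0, max_polls: int = 20):
    """Poll the export endpoint until completion (sketch).

    `fetch` takes a URL and returns the parsed JSON status dict.
    """
    url = f"https://mega-api-prod.nemovideo.ai/api/render/proxy/lambda/{render_id}"
    for _ in range(max_polls):
        status = fetch(url)
        if status.get("status") == "completed":   # assumed field names
            return status.get("download_url")
        time.sleep(interval)
    raise TimeoutError("export did not complete within the polling budget")
```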

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is ai-video-maker-for-marketing, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
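Putting the header rules together — a sketch; the two documented path prefixes are from the text above, while the exact path-matching logic and function names are assumptions:

```python
import os

def detect_platform(install_path: str) -> str:
    """Map an install path to X-Skill-Platform as described above (sketch)."""
    path = os.path.expanduser(install_path) + "/"
    if "/.clawhub/" in path:
        return "clawhub"
    if "/.cursor/skills/" in path:
        return "cursor"
    return "unknown"

def attribution_headers(token: str, version: str, install_path: str) -> dict:
    """Auth header plus the three attribution headers required by the API."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "ai-video-maker-for-marketing",
        "X-Skill-Version": version,
        "X-Skill-Platform": detect_platform(install_path),
    }
```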

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks): 1. Video: city timelapse (0-10s) 2. BGM: Lo-fi (0-10s, 35%) 3. Title: "Urban Dreams" (0-3s)
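A decoder from the short-key draft JSON to a summary like the one above could look like this sketch. The short keys and type codes come from the doc; the segment metadata field `name` and the assumption that segments start at 0 are illustrative:

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft: dict) -> list:
    """Expand short-key draft JSON into readable track lines (sketch)."""
    lines = []
    for i, track in enumerate(draft.get("t", []), 1):       # t = tracks
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")  # tt = track type
        for seg in track.get("sg", []):                     # sg = segments
            dur_s = seg.get("d", 0) / 1000                  # d = duration in ms
            name = seg.get("m", {}).get("name", "untitled") # m = metadata (assumed field)
            lines.append(f"{i}. {kind}: {name} (0-{dur_s:g}s)")
    return lines
```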

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

Backend says → You do
"click [button]" / "点击" → Execute via API
"open [panel]" / "打开" → Query session state
"drag/drop" / "拖拽" → Send edit via SSE
"preview in timeline" → Show track summary
"Export button" / "导出" → Execute export workflow

SSE Event Handling

Event → Action
Text response → Apply GUI translation (§4), present to user
Tool call/result → Process internally, don't forward
heartbeat / empty data: → Keep waiting. Every 2 min: "⏳ Still working..."
Stream closes → Process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
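A minimal classifier for raw SSE lines, following the table above — a sketch; treating an empty line or a bare `data:` as a heartbeat is a simplification of full SSE framing:

```python
def classify_sse_line(line: str) -> str:
    """Classify one raw SSE line per the event-handling table (sketch)."""
    line = line.strip()
    if not line or line == "data:":
        return "heartbeat"              # keep waiting
    if line.startswith("data:"):
        payload = line[len("data:"):].strip()
        if not payload:
            return "heartbeat"
        return "event"                  # parse payload, apply GUI translation
    return "ignore"                     # comments / other SSE fields
```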

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
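The recovery steps above map naturally onto a lookup table — a sketch with paraphrased action strings:

```python
def next_action(code: int) -> str:
    """Map an API error code to its recovery step (sketch of the list above)."""
    actions = {
        0:    "continue",
        1001: "reacquire anonymous token",
        1002: "create new session",
        2001: "out of credits: offer registration or top-up",
        4001: "show accepted formats",
        4002: "suggest compressing or trimming",
        400:  "generate X-Client-Id and retry",
        402:  "explain subscription-tier requirement",
        429:  "wait 30s, retry once",
    }
    return actions.get(code, "surface raw error to user")
```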

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn these product images into a 30-second promotional video with text overlays and background music" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, PNG, JPG for the smoothest experience.

Export as MP4 for widest compatibility across ad platforms like Meta and Google Ads.

Common Workflows

Quick edit: Upload → "turn these product images into a 30-second promotional video with text overlays and background music" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
