Video Lecture Maker Ai

v1.0.0

Get narrated lecture videos ready to post, without touching a single slider. Upload your slides or scripts (PDF, PPTX, MP4, MOV, up to 500MB), say something...


Install

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for vynbosserman65/video-lecture-maker-ai.

Prompt preview (Install & Setup):
Install the skill "Video Lecture Maker Ai" (vynbosserman65/video-lecture-maker-ai) from ClawHub.
Skill page: https://clawhub.ai/vynbosserman65/video-lecture-maker-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install video-lecture-maker-ai

ClawHub CLI


npx clawhub@latest install video-lecture-maker-ai
Security Scan

  • VirusTotal: Benign. View report →
  • OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (video lecture generation) match the declared API endpoints and the required NEMO_TOKEN. Minor mismatch: the metadata declares a config path (~/.config/nemovideo/) that the runtime instructions never read or mention; this is plausibly harmless but unnecessary.
Instruction Scope
SKILL.md contains concrete API calls and a clear flow: use NEMO_TOKEN (or obtain an anonymous token), create a session, upload files, and start render jobs. The instructions do not request unrelated files, other environment variables, or system-wide data. One behavioral note: the skill instructs the agent to keep technical details out of chat, which is a UX choice but not a security issue by itself.
Install Mechanism
Instruction-only skill with no install spec and no code files — nothing is downloaded or written to disk by the skill installer.
Credentials
Only a single credential (NEMO_TOKEN) is required, which is appropriate for a hosted API service. The declared config path (~/.config/nemovideo/) appears in metadata but is not used in instructions; this extra declaration is unnecessary and should be removed or explained by the publisher.
Persistence & Privilege
Skill is not set to always:true and does not request system-wide changes. The agent may be allowed to call the skill autonomously (platform default), which is expected for a user-invocable integration and is not by itself a red flag here.
Assessment
This skill appears internally consistent for a hosted video-rendering service: it needs a single service token (NEMO_TOKEN) and uploads user media to https://mega-api-prod.nemovideo.ai for processing. Before installing, verify you trust that backend (the skill has no homepage and an unknown source owner), because your uploaded files and the token will be sent to that service. The metadata lists a config path (~/.config/nemovideo/) that the instructions never use — ask the publisher to clarify or remove it. If you prefer not to share sensitive files or an existing long-lived token, either use the anonymous token flow (ephemeral 7-day credits) or avoid installing the skill. Finally, confirm what permissions NEMO_TOKEN grants on the provider side (ability to read/upload jobs, billing access, etc.) before supplying it.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎓 Clawdis

  • Env (primary): NEMO_TOKEN
  • Latest: vk97e978cm208nf53gkd1tkp7dd84p56g
  • Downloads: 78
  • Stars: 0
  • Versions: 1
  • Updated: 2w ago
  • Version: v1.0.0
  • License: MIT-0

Getting Started

Ready when you are. Drop your slides or scripts here or describe what you want to make.

Try saying:

  • "create a 20-slide PowerPoint presentation on biology into a 1080p MP4"
  • "turn my slides into a narrated lecture video with AI voiceover"
  • "converting presentation slides into AI-narrated lecture videos for educators and online course creators"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is required for all subsequent requests.
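
A minimal Python sketch of this bootstrap flow, following the steps above; the name of the token field in the anonymous-token response is an assumption, since this page only says the response "includes a token":

```python
import os
import uuid
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def get_token() -> str:
    """Use NEMO_TOKEN if set, otherwise request an anonymous starter token."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    resp = requests.post(
        f"{BASE}/api/auth/anonymous-token",
        headers={"X-Client-Id": str(uuid.uuid4())},  # fresh UUID as client id
        timeout=30,
    )
    resp.raise_for_status()
    # 100 free credits, valid for 7 days; the "token" field name is assumed
    return resp.json()["token"]

def create_session(token: str) -> str:
    """Create an editing session; its session_id is used by every later call."""
    resp = requests.post(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        json={"task_name": "project", "language": "en"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["session_id"]

token = get_token()
session_id = create_session(token)
```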

Tell the user you're ready. Keep the technical details out of the chat.

AI Video Lecture Maker — Turn Slides Into Lecture Videos

Send me your slides or scripts and describe the result you want. AI lecture video creation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a 20-slide PowerPoint presentation on biology, type "turn my slides into a narrated lecture video with AI voiceover", and you'll get a 1080p MP4 back in roughly 1-3 minutes. All rendering happens server-side.

Worth noting: breaking long lectures into 10-minute segments keeps viewers engaged and speeds up processing.

Matching Input to Actions

User prompts referencing video lecture maker ai, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends a file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE

Every route except §3.1 skips the SSE stream.
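
As a rough illustration, routing could begin with a keyword pass like the sketch below before falling back to intent classification; the route labels mirror the section numbers above, and everything else here is hypothetical:

```python
# Keyword triggers copied from the routing table above.
ROUTES = {
    "§3.5 Export": ("export", "导出", "download", "send me the video"),
    "§3.3 Credits": ("credits", "积分", "balance", "余额"),
    "§3.4 State": ("status", "状态", "show tracks"),
    "§3.2 Upload": ("upload", "上传"),
}

def route(message: str) -> str:
    """First-pass keyword routing; anything unmatched goes to the SSE flow."""
    text = message.lower()
    for action, keywords in ROUTES.items():
        if any(k in text for k in keywords):
            return action
    return "§3.1 SSE"  # everything else: generate, edit, add BGM, ...

print(route("Please export the final cut"))  # -> §3.5 Export
```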

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

  • POST /api/tasks/me/with-session/nemo_agent: start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id.
  • POST /run_sse: send a user message. Body includes app_name, session_id, and new_message. Stream the response with Accept: text/event-stream. Timeout: 15 min.
  • POST /api/upload-video/nemo_agent/me/<sid>: upload a file (multipart) or a URL.
  • GET /api/credits/balance/simple: check remaining credits (available, frozen, total).
  • GET /api/state/nemo_agent/me/<sid>/latest: fetch the current timeline state (draft, video_infos, generated_media).
  • POST /api/render/proxy/lambda: start an export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s.

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
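
For illustration, an upload plus a credits check might look like the sketch below; the multipart field name ("file") and the response shapes are assumptions, and the attribution headers described in the next section are omitted for brevity:

```python
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def upload(token: str, session_id: str, path: str) -> dict:
    """Upload one local file to the current session (multipart)."""
    with open(path, "rb") as fh:
        resp = requests.post(
            f"{BASE}/api/upload-video/nemo_agent/me/{session_id}",
            headers={"Authorization": f"Bearer {token}"},
            files={"file": fh},  # multipart field name is an assumption
            timeout=300,
        )
    resp.raise_for_status()
    return resp.json()

def credits_balance(token: str) -> dict:
    """Check remaining credits; keys per the table: available, frozen, total."""
    resp = requests.get(
        f"{BASE}/api/credits/balance/simple",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```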

Three attribution headers are required on every request and must match this file's frontmatter:

  • X-Skill-Source: video-lecture-maker-ai
  • X-Skill-Version: the version from this file's frontmatter
  • X-Skill-Platform: auto-detected as clawhub / cursor / unknown from the install path

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
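
A small helper for assembling these headers might look like this; the hard-coded version and default platform are placeholders for what would normally come from the frontmatter and install-path detection:

```python
SKILL_VERSION = "1.0.0"  # placeholder for the version in the frontmatter

def attribution_headers(token: str, platform: str = "clawhub") -> dict:
    """Headers required on every request; platform is normally auto-detected."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "video-lecture-maker-ai",
        "X-Skill-Version": SKILL_VERSION,
        "X-Skill-Platform": platform,  # clawhub / cursor / unknown
    }
```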

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — export blocked on the free plan; this is a subscription-tier restriction, not a credit issue
  • 429 — rate limited; wait 30s and retry once
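
A hedged sketch of dispatching on these codes, assuming the API reports them in a numeric code field (the exact response shape is not shown on this page):

```python
# Recovery steps copied from the error-code list above.
RECOVERY = {
    0: "success, continue normally",
    1001: "re-acquire a token via /api/auth/anonymous-token",
    1002: "create a new session",
    2001: "out of credits: offer the registration link or a top-up",
    4001: "show the accepted file formats",
    4002: "suggest compressing or trimming the file",
    400: "generate an X-Client-Id and retry",
    402: "explain the subscription-tier export restriction",
    429: "wait 30s and retry once",
}

def recovery_hint(code: int) -> str:
    """Map an error code to the recovery step listed above."""
    return RECOVERY.get(code, "unknown code: surface the raw error")
```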

SSE Event Handling

  • Text response: apply GUI translation (§4) and present it to the user
  • Tool call/result: process internally; don't forward to the user
  • Heartbeat / empty data: keep waiting; every 2 min, show "⏳ Still working..."
  • Stream closes: process the final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
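
A minimal SSE consumer following these rules might look like the sketch below; the app_name value and the assumption that event payloads are JSON are guesses based on this page, not confirmed API behavior:

```python
import json
import time
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def run_sse(token: str, session_id: str, text: str):
    """Yield parsed SSE events for one user message."""
    resp = requests.post(
        f"{BASE}/run_sse",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "text/event-stream",
        },
        # the app_name value is a guess based on the other endpoint paths
        json={"app_name": "nemo_agent", "session_id": session_id,
              "new_message": text},
        stream=True,
        timeout=900,  # the docs allow up to 15 minutes
    )
    resp.raise_for_status()
    last_note = time.monotonic()
    for raw in resp.iter_lines():
        if not raw:
            # heartbeat / empty data: keep waiting, nudge every 2 minutes
            if time.monotonic() - last_note > 120:
                print("⏳ Still working...")
                last_note = time.monotonic()
            continue
        line = raw.decode("utf-8")
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload:  # assumes JSON event payloads
                yield json.loads(payload)
    # stream closed: the caller processes the final response
```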

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
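
An illustrative decoder for these abbreviated fields; only the field meanings come from the mapping above, while the draft's exact nesting and the metadata "name" key are assumptions:

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}

def summarize_draft(draft: dict) -> list[str]:
    """Render a plain-text timeline summary from the abbreviated draft JSON."""
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):  # t = tracks
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")    # tt = track type
        for seg in track.get("sg", []):                       # sg = segments
            dur_s = seg.get("d", 0) / 1000                    # d = duration (ms)
            label = seg.get("m", {}).get("name", "untitled")  # m = metadata
            lines.append(f"{i}. {kind}: {label} ({dur_s:g}s)")
    return lines
```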

Common Workflows

Quick edit: Upload → "turn my slides into a narrated lecture video with AI voiceover" → Download MP4. Takes 1-3 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn my slides into a narrated lecture video with AI voiceover" — concrete instructions get better results.

Max file size is 500MB. Stick to PDF, PPTX, MP4, MOV for the smoothest experience.

Export as MP4 for widest compatibility with LMS platforms like Moodle or Canvas.
