Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Midjourney Video Generator Free

v1.0.0

Get AI-generated video clips ready to post, without touching a single slider. Send a text prompt or upload media (MP4, MOV, WebM, GIF, up to 500MB), say something like...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dsewell-583h0/midjourney-video-generator-free.

Prompt Preview: Install & Setup
Install the skill "Midjourney Video Generator Free" (dsewell-583h0/midjourney-video-generator-free) from ClawHub.
Skill page: https://clawhub.ai/dsewell-583h0/midjourney-video-generator-free
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install midjourney-video-generator-free

ClawHub CLI


npx clawhub@latest install midjourney-video-generator-free
Security Scan
VirusTotal: Benign (View report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill claims to be 'Midjourney Video Generator', but all runtime instructions target a different service (mega-api-prod.nemovideo.ai) and require a NEMO_TOKEN. The name implies integration with Midjourney while the implementation uses Nemo's API — a branding/misrepresentation mismatch. The required NEMO_TOKEN itself is proportionate to the stated backend, but the marketing name is misleading.
Instruction Scope
Instructions direct the agent to obtain/use NEMO_TOKEN, create sessions, stream SSE, upload user files (multipart or URL), poll render endpoints, and return download URLs — all expected for remote rendering. Concerningly, the skill also instructs the agent to read the YAML frontmatter of the skill file for attribution and to detect the install path (~/.clawhub, ~/.cursor/skills) to set X-Skill-Platform. That requires filesystem inspection outside the declared configPaths and is not strictly necessary for generating videos. The skill also tells the agent to keep technical details out of the chat, which could obscure what is being sent to the backend.
Install Mechanism
No install spec or code is present (instruction-only). No downloads or binaries are requested, so there's no installed code footprint from the skill itself.
Credentials
Only NEMO_TOKEN is declared as required, which matches the API-based workflow. However, the skill will upload user media (up to 500MB) and potentially read install paths / its own SKILL.md frontmatter; users should understand that tokens and uploaded files are transmitted to https://mega-api-prod.nemovideo.ai and that anonymous tokens generated by the skill give the service 7‑day access. If you supply a persistent NEMO_TOKEN, that credential will be usable by the backend — ensure you trust the service.
Persistence & Privilege
The skill does not request always:true and does not require elevated or permanent platform privileges. It does suggest reading local paths for attribution, but it does not attempt to modify other skills or system configuration.
What to consider before installing
This skill will upload user media and session/auth tokens to an external domain (mega-api-prod.nemovideo.ai). Two things to note before installing or using it: (1) The skill name references 'Midjourney' but the runtime calls a different service (Nemo) — this may be misleading; confirm you trust nemovideo.ai and its privacy/TOS. (2) Using the skill will transmit files you upload (videos, GIFs, etc.) and/or a NEMO_TOKEN credential to that service; avoid uploading sensitive content and avoid providing long‑lived credentials unless you trust the provider. Ask the author for a homepage, source code, or privacy policy; prefer using your own NEMO_TOKEN over allowing the skill to mint anonymous tokens; and consider testing with non-sensitive content or in an isolated environment. If the author clarifies that this is an official Midjourney integration (with documentation or source) or changes the branding to match Nemo, my confidence would increase.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97fxdbzt9cvmm3pfjg5qh6jh5859zwp
86 downloads · 0 stars · 1 version
Updated 6d ago
v1.0.0 · MIT-0

Getting Started

Got text prompts to work with? Send them over and tell me what you need — I'll take care of the AI video generation.

Try saying:

  • "generate a short text description like 'a fox running through a forest at sunset' into a 1080p MP4"
  • "generate a 10-second cinematic video clip from my text prompt"
  • "generating short videos from text prompts without paid tools for content creators and social media users"

Getting Connected

Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".

If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:

  • Generate a UUID as client identifier
  • POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the X-Client-Id header
  • The response includes a token with 100 free credits valid for 7 days — use it as NEMO_TOKEN

Then create a session by POSTing to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer authorization and body {"task_name":"project","language":"en"}. The session_id in the response is needed for all following requests.
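For concreteness, here is a minimal Python sketch of that connection flow using `requests`. The endpoints and request bodies come from this page; the `token` field name in the anonymous-token response is an assumption.

```python
import os
import uuid

import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def get_token() -> str:
    """Use NEMO_TOKEN from the environment, or mint a 7-day anonymous token."""
    token = os.environ.get("NEMO_TOKEN")
    if token:
        return token
    client_id = str(uuid.uuid4())  # UUID as the client identifier
    resp = requests.post(
        f"{BASE}/api/auth/anonymous-token",
        headers={"X-Client-Id": client_id},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumption: the starter token is returned under a "token" key.
    return resp.json()["token"]

def create_session(token: str) -> str:
    """Create a session; the session_id is needed for all following requests."""
    resp = requests.post(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        json={"task_name": "project", "language": "en"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["session_id"]
```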

Tell the user you're ready. Keep the technical details out of the chat.

Midjourney Video Generator Free — Generate AI Videos From Text

Send me your text prompts and describe the result you want. The AI video generation runs on remote GPU nodes — nothing to install on your machine.

A quick example: send a short text description like 'a fox running through a forest at sunset', type "generate a 10-second cinematic video clip from my text prompt", and you'll get a 1080p MP4 back in roughly 1-2 minutes. All rendering happens server-side.

Worth noting: shorter, more specific prompts tend to produce more accurate and consistent results.

Matching Input to Actions

User prompts referencing this skill by name, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification (a routing sketch follows the table below).

User says... → Action
  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
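As a rough illustration of that routing, a Python sketch using plain substring matching; the real skill may use richer intent classification, and the keyword lists here are abridged.

```python
# Keyword router mirroring the table above. Substring matching is a
# simplification; actual routing also uses intent classification.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "export"),  # §3.5
    (("credits", "积分", "balance", "余额"), "credits"),              # §3.3
    (("status", "状态", "show tracks"), "state"),                     # §3.4
    (("upload", "上传"), "upload"),                                   # §3.2
]

def route(message: str) -> str:
    text = message.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"  # everything else: generate, edit, add BGM, ... (§3.1)
```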

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers listed below. If any header is missing, exports return 402.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: midjourney-video-generator-free
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub → clawhub, ~/.cursor/skills → cursor, else unknown)
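A hedged sketch of assembling those headers in Python; the platform detection mirrors the path rules above, and the `version` value is passed in rather than parsed from the frontmatter here.

```python
from pathlib import Path

def attribution_headers(skill_file: Path, version: str) -> dict:
    """Build the three attribution headers; exports return 402 without them."""
    path = str(skill_file.resolve())
    if "/.clawhub/" in path:
        platform = "clawhub"
    elif "/.cursor/skills/" in path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "X-Skill-Source": "midjourney-video-generator-free",
        "X-Skill-Version": version,  # taken from the SKILL.md frontmatter
        "X-Skill-Platform": platform,
    }
```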

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.
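A Python sketch of that SSE call; the assumption here is that events arrive as standard `data:` lines carrying JSON, with empty `data:` lines acting as heartbeats (see "Reading the SSE Stream" below).

```python
import json

import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def send_message_sse(token: str, session_id: str, text: str, extra_headers: dict):
    """POST /run_sse and yield each non-empty data payload as it arrives."""
    resp = requests.post(
        f"{BASE}/run_sse",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "text/event-stream",
            **extra_headers,  # the attribution headers described above
        },
        json={
            "app_name": "nemo_agent",
            "user_id": "me",
            "session_id": session_id,
            "new_message": {"parts": [{"text": text}]},
        },
        stream=True,
        timeout=15 * 60,  # documented 15-minute ceiling
    )
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        # Empty data: lines are heartbeats; the backend is still working.
        if line and line.startswith("data:") and line[5:].strip():
            yield json.loads(line[5:])
```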

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}
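Both upload variants as a Python sketch; the `files` field name follows the multipart fragment above, and the response shape is not documented here, so it is returned raw.

```python
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def upload_file(token: str, sid: str, path: str, extra_headers: dict) -> dict:
    """Multipart upload of a local file (max 500MB per this page)."""
    with open(path, "rb") as f:
        resp = requests.post(
            f"{BASE}/api/upload-video/nemo_agent/me/{sid}",
            headers={"Authorization": f"Bearer {token}", **extra_headers},
            files={"files": f},
        )
    resp.raise_for_status()
    return resp.json()

def upload_url(token: str, sid: str, url: str, extra_headers: dict) -> dict:
    """URL-based upload variant."""
    resp = requests.post(
        f"{BASE}/api/upload-video/nemo_agent/me/{sid}",
        headers={"Authorization": f"Bearer {token}", **extra_headers},
        json={"urls": [url], "source_type": "url"},
    )
    resp.raise_for_status()
    return resp.json()
```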

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
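A polling sketch for that export flow; where `status` and `output.url` sit in the poll response is an assumption based on the description above.

```python
import time

import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def export_draft(token: str, sid: str, draft: dict, extra_headers: dict) -> str:
    """Start a free export, poll every 30s, return the download URL."""
    auth = {"Authorization": f"Bearer {token}", **extra_headers}
    job_id = f"render_{int(time.time())}"
    resp = requests.post(
        f"{BASE}/api/render/proxy/lambda",
        headers=auth,
        json={
            "id": job_id,
            "sessionId": sid,
            "draft": draft,
            "output": {"format": "mp4", "quality": "high"},
        },
    )
    resp.raise_for_status()
    while True:
        time.sleep(30)
        status = requests.get(
            f"{BASE}/api/render/proxy/lambda/{job_id}", headers=auth
        ).json()
        # Assumption: the poll response carries "status" and "output" keys.
        if status.get("status") == "completed":
            return status["output"]["url"]
```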

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Error Handling

Code | Meaning | Action
0 | Success | Continue
1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days)
1002 | Session not found | New session §3.0
2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from the create-session or state response when needed). Registered: "Top up credits in your account"
4001 | Unsupported file | Show supported formats
4002 | File too large | Suggest compress/trim
400 | Missing X-Client-Id | Generate a Client-Id and retry (see §1)
402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export."
429 | Rate limit (1 token/client/7 days) | Retry once after 30s
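If you script against these codes, a lookup table keeps the recovery logic in one place. This sketch just restates the table; it is not part of the skill itself.

```python
# Recovery actions keyed by the error codes in the table above.
RECOVERY = {
    0: "success; continue",
    1001: "re-auth via /api/auth/anonymous-token (tokens expire after 7 days)",
    1002: "create a new session (§3.0)",
    2001: "out of credits; show registration or top-up guidance",
    4001: "unsupported file; show the supported-formats list",
    4002: "file too large; suggest compressing or trimming",
    400: "generate an X-Client-Id and retry (§1)",
    402: "export blocked by plan tier, not credits; suggest upgrading",
    429: "rate limited (1 token/client/7 days); retry once after 30s",
}

def recovery_action(code: int) -> str:
    return RECOVERY.get(code, f"unknown code {code}; surface the raw error")
```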

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

Backend says | You do
"click [button]" / "点击" | Execute via API
"open [panel]" / "打开" | Query session state
"drag/drop" / "拖拽" | Send edit via SSE
"preview in timeline" | Show track summary
"Export button" / "导出" | Execute export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
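A sketch of decoding those short keys into a summary like the one above; the exact nesting of the draft JSON (where segments and metadata live) is an assumption.

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft: dict) -> str:
    """Turn short-key draft JSON into a one-line timeline summary.

    Assumed shape: draft["t"] is a list of tracks, each with "tt" (type)
    and "sg" (segments), and each segment has "d" (duration in ms).
    """
    tracks = draft.get("t", [])
    parts = [f"Timeline ({len(tracks)} tracks):"]
    for i, track in enumerate(tracks, start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        total_ms = sum(seg.get("d", 0) for seg in track.get("sg", []))
        parts.append(f"{i}. {kind}: {total_ms / 1000:.0f}s total")
    return " ".join(parts)
```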

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate a 10-second cinematic video clip from my text prompt" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, WebM, GIF for the smoothest experience.

Export as MP4 for widest compatibility across social platforms and devices.

Common Workflows

Quick edit: Upload → "generate a 10-second cinematic video clip from my text prompt" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
