Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Bing Video Creator Ai

v1.0.0

Cloud-based video-generation tool that turns text prompts or images into videos. Upload MP4, MOV, PNG, or JPG files (up to 200MB), describe what you want, and download the result.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dsewell-583h0/bing-video-creator-ai.

Prompt preview (Install & Setup):
Install the skill "Bing Video Creator Ai" (dsewell-583h0/bing-video-creator-ai) from ClawHub.
Skill page: https://clawhub.ai/dsewell-583h0/bing-video-creator-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install bing-video-creator-ai

ClawHub CLI

npx clawhub@latest install bing-video-creator-ai

Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)

Purpose & Capability
The skill is presented as "Bing Video Creator AI", but all API endpoints point to mega-api-prod.nemovideo.ai (a third-party domain), which suggests the name is misleading. Registry metadata reports no required config paths, yet the SKILL.md frontmatter includes configPaths: ["~/.config/nemovideo/"], an inconsistency. Otherwise, the single declared credential (NEMO_TOKEN) and the described cloud video operations are coherent with a video-generation integration.
Instruction Scope
Instructions are explicit about contacting the remote API, obtaining an anonymous token, creating sessions, uploading media, and polling for exports. They do not instruct accessing unrelated local files or credentials beyond NEMO_TOKEN. The guidance to "save session_id" implies the skill will persist session state (expected), and it warns not to print tokens. Nothing in the SKILL.md asks the agent to read system files or other credentials, but the presence of a config path in metadata (see above) is unexplained in the registry entry.
Install Mechanism
This is an instruction-only skill with no install spec and no code files, so it does not download or write code to disk. That is the lowest‑risk install pattern.
Credentials
Only one environment variable (NEMO_TOKEN) is required, which is proportionate for an API integration. However, the skill can also obtain anonymous tokens by calling the remote auth endpoint; you should consider the sensitivity of any token placed in NEMO_TOKEN because it grants the skill the ability to create renders and upload data to the third‑party service. The required attribution headers (X-Skill-Source/Version/Platform) will be sent with every call and could be used for telemetry or enforcement.
Persistence & Privilege
The skill is not always-enabled and does not request elevated platform privileges. It instructs saving a session_id for ongoing operations (expected). It does not request modifying other skills or global agent settings.
What to consider before installing
This skill appears to call a third-party API (nemovideo.ai) rather than an official Microsoft/Bing service despite its name, which could be misleading. Before using:

  1. Verify the provider (homepage, owner identity, and privacy/terms); there is no homepage listed.
  2. Prefer an anonymous token (the SKILL.md supports that) rather than placing a long-lived or high-privilege credential in NEMO_TOKEN.
  3. Ask the publisher why the skill is branded "Bing" and why SKILL.md lists a config path while registry metadata did not.
  4. If you must provide a token, restrict its scope and expiration, and avoid using sensitive account credentials.
  5. Consider testing with throwaway content and an anonymous token first; review network traffic and the remote service's data retention/privacy policy before uploading any private media.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎬 Clawdis
Env (primary): NEMO_TOKEN
Latest: vk97edgzb5keq6keambhcahz2rs84kr2g
82 downloads · 0 stars · 1 version
Updated 2w ago
v1.0.0 · MIT-0

Getting Started

Share your text prompts and I'll get started on AI video creation. Or just tell me what you're thinking.

Try saying:

  • "generate my text prompts"
  • "export 1080p MP4"
  • "create a 30-second video from my"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
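
A minimal sketch of this bootstrap, assuming Python with the requests library; the endpoints and the data.token field come from this page, while the session_id nesting and error handling are assumptions (the attribution headers described later are omitted here for brevity):

    import os
    import uuid
    import requests

    BASE = "https://mega-api-prod.nemovideo.ai"

    # Reuse an existing token if one is configured...
    token = os.environ.get("NEMO_TOKEN")
    if not token:
        # ...otherwise mint a free anonymous one (100 credits, 7-day expiry).
        client_id = str(uuid.uuid4())
        resp = requests.post(f"{BASE}/api/auth/anonymous-token",
                             headers={"X-Client-Id": client_id})
        resp.raise_for_status()
        token = resp.json()["data"]["token"]

    # Open a session for this project; later calls need the session_id.
    resp = requests.post(f"{BASE}/api/tasks/me/with-session/nemo_agent",
                         headers={"Authorization": f"Bearer {token}"},
                         json={"task_name": "project"})
    resp.raise_for_status()
    session_id = resp.json().get("session_id")  # exact response nesting is assumed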

Bing Video Creator AI — Generate Videos from Text Prompts

This tool takes your text prompts and runs AI video creation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a short script about a new sneaker release and want a 30-second video from it: the backend processes the request in about 1-2 minutes and hands you a 1080p MP4.

Tip: shorter, more specific prompts tend to produce more accurate video results.

Matching Input to Actions

User prompts referencing bing video creator ai, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export (skips SSE)
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits (skips SSE)
  • "status" / "状态" / "show tracks" → §3.4 State (skips SSE)
  • "upload" / "上传" / user sends file → §3.2 Upload (skips SSE)
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
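
The skill doesn't prescribe an implementation, but as a sketch the routing could be a keyword scan with SSE as the fallback (the function and table names here are illustrative):

    # Hypothetical router: first keyword hit wins, default is the SSE chat.
    ROUTES = [
        (("export", "导出", "download", "send me the video"), "export"),  # §3.5
        (("credits", "积分", "balance", "余额"), "credits"),              # §3.3
        (("status", "状态", "show tracks"), "state"),                     # §3.4
        (("upload", "上传"), "upload"),                                   # §3.2
    ]

    def route(message: str) -> str:
        text = message.lower()
        for keywords, action in ROUTES:
            if any(k in text for k in keywords):
                return action
        return "sse"  # everything else: generate, edit, add BGM, ...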

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid>, multipart file or JSON with URLs (sketched below, after the formats line).
  4. Credits: GET /api/credits/balance/simple, returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest, current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL (see the polling sketch after this list).
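
A sketch of that export flow (item 6), assuming Python's requests; the request body shape and the response field names (status, download_url) are assumptions:

    import os
    import time
    import requests

    BASE = "https://mega-api-prod.nemovideo.ai"
    HEADERS = {
        "Authorization": f"Bearer {os.environ['NEMO_TOKEN']}",
        # The three attribution headers required on every call (see below).
        "X-Skill-Source": "bing-video-creator-ai",
        "X-Skill-Version": "1.0.0",
        "X-Skill-Platform": "clawhub",  # or cursor / unknown
    }

    def export(render_id: str, draft: dict) -> str:
        # Submit the render job; the body shape is an assumption.
        requests.post(f"{BASE}/api/render/proxy/lambda",
                      headers=HEADERS,
                      json={"render_id": render_id, "draft": draft}).raise_for_status()
        # Poll every 30s until the job reports completion.
        while True:
            time.sleep(30)
            job = requests.get(f"{BASE}/api/render/proxy/lambda/{render_id}",
                               headers=HEADERS).json()
            if job.get("status") == "completed":  # field names are assumptions
                return job["download_url"]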

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
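
A sketch of the multipart upload from item 3, again assuming Python's requests; the multipart field name "file" and the placeholder session ID are assumptions:

    import os
    import requests

    BASE = "https://mega-api-prod.nemovideo.ai"
    headers = {"Authorization": f"Bearer {os.environ['NEMO_TOKEN']}"}
    # The three X-Skill-* attribution headers (next section) are also required.

    sid = "<session_id from setup>"
    with open("clip.mp4", "rb") as f:
        resp = requests.post(f"{BASE}/api/upload-video/nemo_agent/me/{sid}",
                             headers=headers,
                             files={"file": f})  # multipart field name assumed
    resp.raise_for_status()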

Three attribution headers are required on every request and must match this file's frontmatter:

X-Skill-Source: bing-video-creator-ai
X-Skill-Version: the frontmatter version
X-Skill-Platform: auto-detected as clawhub / cursor / unknown from the install path

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
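
As a sketch, the full header set might be assembled like this; build_headers is a hypothetical helper, and the version and platform values must come from your actual install:

    # Hypothetical helper assembling the required header set; the platform
    # argument stands in for the auto-detect rule above.
    def build_headers(token: str, version: str = "1.0.0",
                      platform: str = "unknown") -> dict:
        return {
            "Authorization": f"Bearer {token}",
            "X-Skill-Source": "bing-video-creator-ai",
            "X-Skill-Version": version,    # must match the frontmatter version
            "X-Skill-Platform": platform,  # clawhub / cursor / unknown
        }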

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
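
Using that mapping, the three-track timeline above might serialize roughly as follows (a sketch only: the documented keys are real, the segment internals are guesses):

    # Draft sketch using the documented keys: t=tracks, tt=track type,
    # sg=segments, d=duration (ms), m=metadata. Segment internals are guesses.
    draft = {
        "t": [
            {"tt": 0, "sg": [{"d": 10000, "m": {"name": "city timelapse"}}]},        # video
            {"tt": 1, "sg": [{"d": 10000, "m": {"name": "Lo-fi", "volume": 0.35}}]},  # BGM
            {"tt": 7, "sg": [{"d": 3000,  "m": {"text": "Urban Dreams"}}]},           # title
        ]
    }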

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

SSE Event Handling

  • Text response → apply GUI translation (§4), present to user
  • Tool call/result → process internally, don't forward
  • Heartbeat / empty data: → keep waiting; every 2 min: "⏳ Still working..."
  • Stream closes → process the final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
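
A sketch of the SSE loop, assuming Python's requests; the message body mirrors the /run_sse description above, while the event-parsing details (data: framing, payload handling) are simplified assumptions:

    import os
    import requests

    BASE = "https://mega-api-prod.nemovideo.ai"
    headers = {
        "Authorization": f"Bearer {os.environ['NEMO_TOKEN']}",
        "Accept": "text/event-stream",
        # plus the three X-Skill-* attribution headers
    }
    body = {
        "session_id": "<sid>",
        "new_message": {"parts": [{"text": "add a lo-fi BGM track"}]},
    }

    with requests.post(f"{BASE}/run_sse", headers=headers, json=body,
                       stream=True, timeout=900) as resp:
        for raw in resp.iter_lines(decode_unicode=True):
            if not raw or not raw.startswith("data:"):
                continue                 # heartbeats / empty lines: keep waiting
            payload = raw[len("data:"):].strip()
            if payload:
                print(payload)           # text event: translate GUI wording, show user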

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
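
A sketch of dispatching on those business codes; the handle function and its recovery strings are illustrative, not part of the skill:

    # Hypothetical dispatcher for the business error codes listed above.
    def handle(code: int) -> str:
        actions = {
            0: "ok",
            1001: "re-acquire token via /api/auth/anonymous-token",
            1002: "create a new session",
            2001: "out of credits: register or top up",
            4001: "unsupported file type: show accepted formats",
            4002: "file too large: compress or trim",
            400: "missing X-Client-Id: generate one and retry",
            402: "export blocked on free plan (subscription tier)",
            429: "rate limited: wait 30s, retry once",
        }
        return actions.get(code, f"unknown code {code}")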

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "create a 30-second video from my script about a new sneaker release" — concrete instructions get better results.

Max file size is 200MB. Stick to MP4, MOV, PNG, JPG for the smoothest experience.

Export as MP4 for widest compatibility across social platforms.

Common Workflows

Quick edit: Upload → "create a 30-second video from my script about a new sneaker release" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
