Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Paid Content Generator Free

v1.0.0

Turn a 200-word product description for a skincare brand into 1080p ready-to-publish videos just by typing what you need. Whether it's generating monetizable...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dsewell-583h0/paid-content-generator-free.

Prompt Preview — Install & Setup
Install the skill "Paid Content Generator Free" (dsewell-583h0/paid-content-generator-free) from ClawHub.
Skill page: https://clawhub.ai/dsewell-583h0/paid-content-generator-free
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install dsewell-583h0/paid-content-generator-free

ClawHub CLI

Package manager switcher

npx clawhub@latest install paid-content-generator-free
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's name and description (generate monetizable videos from text) match the API calls and the single required credential (NEMO_TOKEN). However, the frontmatter in SKILL.md declares a config path (~/.config/nemovideo/) and logic to detect install paths (~/.clawhub/, ~/.cursor/skills/) while the registry metadata listed no required config paths — this mismatch is unexplained and unnecessary for core video generation.
Instruction Scope
Runtime instructions include network calls to a third-party API (mega-api-prod.nemovideo.ai) to obtain anonymous tokens, create sessions, upload local files, stream SSE, poll render status, and return download URLs — most are appropriate for cloud render functionality. Concerns: (1) the agent is instructed to detect local install paths and read YAML frontmatter to populate attribution headers (unneeded file-system access beyond user-provided uploads); (2) instructions to 'store the returned session_id' are vague about where/how (memory vs disk), which affects persistence/privacy; (3) the skill instructs the agent to hide raw API responses/tokens from users — reasonable for secrecy, but reduces transparency.
Install Mechanism
This is an instruction-only skill with no install spec and no code files — lowest install risk. Nothing is downloaded or written by an installer as part of the skill package.
Credentials
The skill requests exactly one credential (NEMO_TOKEN) which is proportionate for a service-backed video renderer. Slight inconsistencies: SKILL.md claims a config path (~/.config/nemovideo/) in its metadata while registry metadata noted none. The skill will also attempt to obtain an anonymous token automatically if NEMO_TOKEN is not set, meaning the agent will make outbound network requests to mint tokens on first use.
Persistence & Privilege
The skill does not request 'always: true' or other elevated persistent privileges. Autonomous invocation is allowed by default (normal). The only persistence implied is storing a session_id and token for ongoing API calls; where that is stored is unspecified, which is a transparency/privacy detail but not an obvious privilege escalation.
What to consider before installing
Before installing, consider these points:

  • Network / privacy: This skill uploads your text and any local media you provide to a third-party service (mega-api-prod.nemovideo.ai). If your briefs or media contain sensitive or proprietary material, do not send them.
  • Token handling: The skill needs a NEMO_TOKEN. If you don't provide one, it will automatically request an anonymous token from the vendor. Decide whether you trust that vendor to handle media and tokens. If you prefer control, supply your own NEMO_TOKEN and ask how and where the skill stores session tokens.
  • Local file access: The skill reads upload file paths (expected) but also attempts to read install/config paths (~/.clawhub/, ~/.cursor/skills/, ~/.config/nemovideo/). That file-system access is not strictly necessary for video rendering and is an unnecessary surface; ask the author why it is required and whether reads are limited to existence checks.
  • Transparency: The instructions explicitly say not to show raw API responses or tokens to the user. That reduces visibility; prefer a skill that logs actions you can review (what was uploaded, which endpoints were called, where tokens are kept).
  • Domain and vendor trust: Verify the nemo/nemovideo service's reputation and privacy policy. If you cannot confirm the service, avoid sending private assets or use a vetted/local tool instead.

What would change this assessment: explicit, consistent registry metadata (matching configPaths), clear instructions about where session/token data are stored (in-memory vs disk, encrypted storage), and a known vendor or official documentation for the API would raise confidence and could move this to 'benign'.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

💰 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97766772xpzcsbpqsk931qcvs854nh7
94 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0
License: MIT-0

Getting Started

Ready when you are. Drop your text prompts or briefs here or describe what you want to make.

Try saying:

  • "generate a 200-word product description for a skincare brand into a 1080p MP4"
  • "turn this product brief into a 60-second promotional video with voiceover and captions"
  • "generating monetizable video content from text without paying for tools for content creators and marketers"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.
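The two steps above can be sketched as request builders. This is a sketch, not the skill's actual code: the function names are hypothetical, while the endpoints, headers, and payload shapes come from the steps above. Building the requests without sending them keeps the token-minting flow inspectable.

```python
import json
import uuid

API_BASE = "https://mega-api-prod.nemovideo.ai"

def anonymous_token_request():
    """Step 1: build the request that mints an anonymous token.

    The response's data.token field becomes NEMO_TOKEN
    (100 free credits, valid 7 days).
    """
    client_id = str(uuid.uuid4())  # random UUID as client identifier
    return ("POST", f"{API_BASE}/api/auth/anonymous-token",
            {"X-Client-Id": client_id})

def create_session_request(token, language="en"):
    """Step 2: build the session-creation request.

    The response carries session_id, which must be kept for all
    subsequent calls.
    """
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"task_name": "project", "language": language})
    return ("POST", f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
            headers, body)
```

Sending the built requests (and deciding where the returned session_id is stored) is left to the caller, which is exactly the detail the security scan flags as unspecified.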

Keep setup communication brief. Don't display raw API responses or token values to the user.

Paid Content Generator Free — Generate monetizable videos from text

Drop your text prompts or briefs in the chat and tell me what you need. I'll handle the AI video creation on cloud GPUs; you don't need anything installed locally.

Here's a typical use: you send a 200-word product description for a skincare brand, ask to turn it into a 60-second promotional video with voiceover and captions, and about 1-2 minutes later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — shorter scripts under 150 words produce tighter, more engaging output videos.

Matching Input to Actions

User prompts referencing video generation, aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification.

User says → Action (Skip SSE?)

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
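The routing table above amounts to a keyword match with an SSE fallback. A minimal sketch (keyword lists are illustrative; the real classifier may also use intent signals beyond literal substring matches):

```python
def route(message: str) -> str:
    """Route a user message to a section, mirroring the keyword table."""
    msg = message.lower()
    table = [
        (("export", "导出", "download", "send me the video"), "§3.5 Export"),
        (("credits", "积分", "balance", "余额"), "§3.3 Credits"),
        (("status", "状态", "show tracks"), "§3.4 State"),
        (("upload", "上传"), "§3.2 Upload"),
    ]
    for keywords, action in table:
        if any(k in msg for k in keywords):
            return action
    return "§3.1 SSE"  # everything else: generate, edit, add BGM…
```

For example, "what's my balance?" routes to §3.3 Credits, while an edit request with no matching keyword falls through to the SSE path.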

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: paid-content-generator-free
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from install path (~/.clawhub/clawhub, ~/.cursor/skills/cursor, else unknown)

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
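Assembling those headers can be sketched as follows (a sketch under stated assumptions: the function name and parameters are hypothetical; the version would actually be read from the SKILL.md frontmatter, and the platform rule mirrors the detection list above):

```python
def request_headers(token, version="1.0.0", install_path=""):
    """Auth plus the three attribution headers every API call needs.

    Missing any attribution header makes exports return 402.
    """
    # Platform detection per the rule above:
    # ~/.clawhub/ -> clawhub, ~/.cursor/skills/ -> cursor, else unknown.
    if "/.clawhub/" in install_path:
        platform = "clawhub"
    elif "/.cursor/skills/" in install_path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "paid-content-generator-free",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```

Note this is also the file-system probing the scan report questions: the platform header only needs a path check, not file reads.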

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
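The export-and-poll flow above can be sketched with an injected HTTP helper; here fetch is a hypothetical callable returning parsed JSON, and the endpoint paths, job-id format, and payload shape follow the description above:

```python
import time

def export_and_poll(session_id, draft, fetch, interval=30, max_polls=20):
    """Submit an export job, then poll until status = completed.

    fetch(method, path, body) is an injected HTTP helper (hypothetical)
    so the control flow can be shown without real network calls.
    """
    job_id = f"render_{int(time.time())}"  # "render_<ts>" per the docs
    fetch("POST", "/api/render/proxy/lambda", {
        "id": job_id,
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    })
    for _ in range(max_polls):
        status = fetch("GET", f"/api/render/proxy/lambda/{job_id}", None)
        if status.get("status") == "completed":
            return status["output"]["url"]  # download URL
        time.sleep(interval)  # poll every 30s by default
    raise TimeoutError("render did not complete")
```

Bounding the poll count matters because, per the pipeline notes, abandoning a session orphans the render job rather than cancelling it.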

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty "data:" lines mean the backend is still working; show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
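A minimal sketch of that per-event decision. The payload shape here is an assumption (text parts under content.parts[].text, tool calls as non-text parts); the skill page does not document the exact event schema:

```python
import json

def handle_event(raw_data: str):
    """Decide what to do with one SSE data payload.

    Returns a (kind, payload) pair: heartbeat, user_text, or internal.
    """
    if not raw_data.strip():
        # Empty data: line = heartbeat; show "⏳ Still working..."
        # if this goes on for 2 minutes.
        return ("heartbeat", None)
    event = json.loads(raw_data)
    parts = event.get("content", {}).get("parts", [])  # assumed shape
    texts = [p["text"] for p in parts if "text" in p]
    if texts:
        # Text events go to the user (after GUI translation).
        return ("user_text", " ".join(texts))
    return ("internal", None)  # tool calls stay internal
```

When the stream closes with no user_text ever emitted (the ~30% case), the caller should fall back to polling /api/state and summarizing the timeline change.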

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

Backend says → You do

  • "click [button]" / "点击" → Execute via API
  • "open [panel]" / "打开" → Query session state
  • "drag/drop" / "拖拽" → Send edit via SSE
  • "preview in timeline" → Show track summary
  • "Export button" / "导出" → Execute export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.
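The terse draft keys can be expanded into a readable summary like so (a sketch: the mapping t/tt/sg/d comes from the line above, but any segment fields beyond d are assumptions):

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}  # tt values per the mapping

def summarize_draft(draft: dict) -> list:
    """Turn the compact draft JSON into one summary line per track."""
    out = []
    for track in draft.get("t", []):                    # t = tracks
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")
        segments = track.get("sg", [])                  # sg = segments
        total_ms = sum(seg.get("d", 0) for seg in segments)  # d = duration(ms)
        out.append(f"{kind}: {len(segments)} segment(s), {total_ms / 1000:.1f}s")
    return out
```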

Example track summary (3-track timeline):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
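The recovery rules above are a straight lookup, which can be sketched as (function name hypothetical; codes and actions taken verbatim from the list):

```python
def next_action(code: int) -> str:
    """Map a backend error code to its documented recovery step."""
    actions = {
        0: "continue",
        1001: "re-acquire token via /api/auth/anonymous-token",
        1002: "create a new session",
        2001: "out of credits: offer registration link or top-up",
        4001: "show accepted formats",
        4002: "suggest compressing or trimming the file",
        400: "generate X-Client-Id and retry",
        402: "export blocked on free plan (subscription tier)",
        429: "wait 30s and retry once",
    }
    return actions.get(code, "unknown code: surface a generic error")
```

Keeping 402 distinct from 2001 matters in practice: one is a plan-tier block, the other a credit balance problem, and only the latter is fixed by topping up.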

Common Workflows

Quick edit: Upload → "turn this product brief into a 60-second promotional video with voiceover and captions" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "turn this product brief into a 60-second promotional video with voiceover and captions" — concrete instructions get better results.

Max file size is 200MB. Stick to MP4, MOV, TXT, DOCX for the smoothest experience.

Export as MP4 for widest compatibility across YouTube, TikTok, and Instagram.
