Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Thread Generator

v1.0.0

Turn a 3-minute talking-head video into 1080p threaded video clips just by typing what you need. Whether it's splitting long videos into sequential thread-re...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dsewell-583h0/thread-generator.

Prompt Preview: Install & Setup
Install the skill "Thread Generator" (dsewell-583h0/thread-generator) from ClawHub.
Skill page: https://clawhub.ai/dsewell-583h0/thread-generator
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install thread-generator

ClawHub CLI


npx clawhub@latest install thread-generator
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill converts uploaded videos by calling a remote rendering API; requiring a service token (NEMO_TOKEN) is coherent with that purpose. No unrelated credentials or binaries are requested.
Instruction Scope
Instructions direct the agent to upload user video files and use an external API (mega-api-prod.nemovideo.ai): create or use an auth token, open sessions, stream SSE, poll state, and save session IDs. These actions are expected for a cloud render service. However, the SKILL.md frontmatter also lists a local config path (~/.config/nemovideo/), while the registry metadata shows no required config paths and the instructions never explain reading or writing that path: an internal inconsistency that could indicate sloppy packaging or an undocumented attempt to access local config.
Install Mechanism
Instruction-only skill with no install spec and no code files — low install risk because nothing is written to disk by an installer. Runtime behavior relies on network calls to a third-party API.
Credentials
Only a single credential (NEMO_TOKEN) is required, which is proportionate for a service that needs a bearer token. The skill also documents how to obtain an anonymous token via the service API. Users should ensure any pre-set NEMO_TOKEN is scoped appropriately (avoid reusing a broad or privileged token).
Persistence & Privilege
The skill sets always:false, and there are no indications it modifies other skills or global agent settings. It stores short-lived session IDs and uses tokens for API calls, which is normal for a remote render workflow.
Scan Findings in Context
[no-regex-findings] expected: No code files were present for the regex scanner to analyze; this is expected for an instruction-only skill, but also reduces static-evidence about runtime behavior.
What to consider before installing
This skill appears to do what it says: it uploads your videos to a remote rendering service and returns processed clips. Take these precautions before installing or using it:

  • Only use a NEMO_TOKEN that you trust: avoid supplying a highly privileged or reusable token. Prefer creating a dedicated token or account for this service.
  • Your videos will be uploaded to https://mega-api-prod.nemovideo.ai; do not upload sensitive or private footage unless you trust the vendor and its privacy policy.
  • The SKILL.md frontmatter mentions a local config path (~/.config/nemovideo/), but the registry metadata omits it and the runtime steps never explain using it. Ask the author to clarify whether the skill will read or write local config files.
  • The skill auto-generates anonymous tokens via an API endpoint; anonymous tokens have limited credits and an expiry but still allow remote processing of your data.
  • The publisher and homepage are unknown; if you need stronger guarantees, request a provenance link, official docs, or an audited package before proceeding.

If the author can confirm the configPath usage and provide a homepage or privacy policy, that would increase confidence that the skill is safe to use for non-sensitive content.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧵 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk976xg14jvp5dy6ahjkg572j9185dsb3
64 downloads · 0 stars · 1 version
Updated 5d ago
v1.0.0
MIT-0

Getting Started

Share your video footage and I'll get started on thread content generation. Or just tell me what you're thinking.

Try saying:

  • "convert my video footage"
  • "export 1080p MP4"
  • "break this video into a Twitter thread"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id: <uuid>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.

Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
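The token and session steps above can be sketched as request builders. This is a minimal sketch: the endpoint paths, header names, and body shapes come from this page, but the helper functions themselves are hypothetical, and actually sending the requests is left to whatever HTTP client the agent uses.

```python
import json
import uuid

BASE_URL = "https://mega-api-prod.nemovideo.ai"

def build_anonymous_token_request() -> dict:
    """Request spec for a free anonymous token (100 credits, 7-day expiry).

    The response field data.token becomes NEMO_TOKEN.
    """
    return {
        "method": "POST",
        "url": f"{BASE_URL}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": str(uuid.uuid4())},
    }

def build_session_request(nemo_token: str) -> dict:
    """Request spec for opening an editing session; the response carries session_id."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/api/tasks/me/with-session/nemo_agent",
        "headers": {"Authorization": f"Bearer {nemo_token}"},
        "body": json.dumps({"task_name": "project"}),
    }
```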

Thread Generator — Convert Videos into Thread Clips

Send me your video footage and describe the result you want. The thread content generation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a 3-minute talking-head video, type "break this video into a Twitter thread with key points and timestamps", and you'll get a 1080p MP4 back in roughly 30-60 seconds. All rendering happens server-side.

Worth noting: keeping each segment under 60 seconds improves thread engagement and processing speed.

Matching Input to Actions

User prompts referencing thread generator, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
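The keyword routing above can be sketched as a lookup table. The section labels come from this page, but the matcher itself is hypothetical: a real agent would combine keyword matching with intent classification rather than plain substring checks.

```python
# Keyword → section routing, mirroring the table above.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "§3.5 Export"),
    (("credits", "积分", "balance", "余额"), "§3.3 Credits"),
    (("status", "状态", "show tracks"), "§3.4 State"),
    (("upload", "上传"), "§3.2 Upload"),
]

def route(message: str) -> str:
    """Return the handling section for a user message; default to the SSE path."""
    text = message.lower()
    for keywords, section in ROUTES:
        if any(k in text for k in keywords):
            return section
    return "§3.1 SSE"  # everything else: generate, edit, add BGM, ...
```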

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

  • POST /api/tasks/me/with-session/nemo_agent: start a new editing session. Body: {"task_name":"project","language":"<lang>"}. Returns session_id.
  • POST /run_sse: send a user message. Body includes app_name, session_id, new_message. Stream the response with Accept: text/event-stream. Timeout: 15 min.
  • POST /api/upload-video/nemo_agent/me/<sid>: upload a file (multipart) or URL.
  • GET /api/credits/balance/simple: check remaining credits (available, frozen, total).
  • GET /api/state/nemo_agent/me/<sid>/latest: fetch the current timeline state (draft, video_infos, generated_media).
  • POST /api/render/proxy/lambda: start an export. Body: {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll status every 30s.

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
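The export body for /api/render/proxy/lambda can be assembled as below. This is a sketch under the body shape documented on this page; the draft JSON would come from session state, and build_export_body is a hypothetical helper, not part of the skill.

```python
import json
import time

def build_export_body(session_id: str, draft: dict) -> str:
    """Assemble the export request body per the endpoint list above."""
    body = {
        "id": f"render_{int(time.time())}",   # the documented render_<ts> id
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }
    return json.dumps(body)
```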

Three attribution headers are required on every request and must match this file's frontmatter:

  • X-Skill-Source: thread-generator
  • X-Skill-Version: the frontmatter version
  • X-Skill-Platform: auto-detected as clawhub / cursor / unknown from the install path

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
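Putting the auth and attribution requirements together, every request would carry a header set like this. A sketch only: the header names and the source value are from this page, while platform detection is stubbed as a parameter and the version must be read from the SKILL.md frontmatter.

```python
def build_headers(nemo_token: str, skill_version: str,
                  platform: str = "unknown") -> dict:
    """Authorization plus the three required attribution headers."""
    return {
        "Authorization": f"Bearer {nemo_token}",
        "X-Skill-Source": "thread-generator",
        "X-Skill-Version": skill_version,   # must match the SKILL.md frontmatter
        "X-Skill-Platform": platform,       # clawhub / cursor / unknown
    }
```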

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
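The recovery rules above can be encoded as a simple dispatch table. The codes and actions mirror the list; the action strings are shorthand for the documented steps, not values the API returns, and the fallback for unknown codes is an assumption.

```python
# Error code → recovery action, mirroring the list above.
ERROR_ACTIONS = {
    0: "continue",
    1001: "re-acquire token via /api/auth/anonymous-token",
    1002: "create a new session",
    2001: "out of credits: offer registration link or top-up",
    4001: "show accepted file formats",
    4002: "suggest compressing or trimming the file",
    400: "generate X-Client-Id and retry",
    402: "explain subscription-tier export block",
    429: "wait 30s and retry once",
}

def recovery_action(code: int) -> str:
    """Map an API error code to the documented recovery step."""
    return ERROR_ACTIONS.get(code, "surface the error to the user")
```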

SSE Event Handling

  • Text response: apply GUI translation (§4) and present to the user
  • Tool call/result: process internally; don't forward
  • heartbeat / empty data: keep waiting; every 2 min post "⏳ Still working..."
  • Stream closes: process the final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
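A sketch of how the handling above might classify raw SSE `data:` payloads. The event shapes here are assumptions: the real /run_sse stream format is not documented on this page, so treat the `tool_call`/`tool_result` field names as illustrative.

```python
import json

def classify_sse_data(data_line: str) -> str:
    """Classify one SSE `data:` payload per the handling rules above."""
    payload = data_line.removeprefix("data:").strip()
    if not payload:
        return "heartbeat"          # empty data: keep waiting
    try:
        event = json.loads(payload)
    except json.JSONDecodeError:
        return "text"               # plain text: translate and present
    if "tool_call" in event or "tool_result" in event:
        return "tool"               # process internally, don't forward
    return "text"
```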

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

  • "click [button]" / "点击": execute via API
  • "open [panel]" / "打开": query session state
  • "drag/drop" / "拖拽": send the edit via SSE
  • "preview in timeline": show a track summary
  • "Export button" / "导出": execute the export workflow

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
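The short-key draft schema can be summarized like this. A sketch under stated assumptions: the key meanings (t, tt, sg, d, m) are from the docs above, but the segment-level layout and a `name` field inside the metadata are guesses for illustration.

```python
# Track-type codes documented above: 0=video, 1=audio, 7=text.
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft: dict) -> list[str]:
    """One line per track: type, name from metadata, total duration."""
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        name = track.get("m", {}).get("name", "untitled")   # assumed field
        total_ms = sum(seg.get("d", 0) for seg in track.get("sg", []))
        lines.append(f"{i}. {kind}: {name} ({total_ms / 1000:g}s)")
    return lines
```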

Common Workflows

Quick edit: Upload → "break this video into a Twitter thread with key points and timestamps" → Download MP4. Takes 30-60 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "break this video into a Twitter thread with key points and timestamps" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility across Twitter, LinkedIn, and Instagram.
