Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Best React Component Generator

v1.0.0

Turn a text description of a login form with email and password fields into 1080p ready-to-use components just by typing what you need. Whether it's generati...

0 · 28 · 0 current · 0 all-time
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
! Purpose & Capability
The skill is named and described as a 'React Component Generator', but the SKILL.md describes a cloud video render pipeline (endpoints under mega-api-prod.nemovideo.ai), upload/export workflows, SSE chat, and media formats (MP4, mov, etc.). Requesting NEMO_TOKEN and mentioning ~/.config/nemovideo/ aligns with a video service, not source-code/component generation — this is a clear mismatch between advertised purpose and actual capability.
! Instruction Scope
Runtime instructions tell the agent to automatically obtain an anonymous token (POST to an external auth endpoint), create and store a session_id, send many API requests (SSE chat, upload, export), and include required attribution headers on every request. It also instructs the agent to silently store tokens/session IDs and to avoid displaying raw tokens to users. Those behaviors go beyond a simple prompt-to-code assistant and introduce automatic networked actions and persistent credential handling that are not implied by the skill's title/description.
Install Mechanism
This is an instruction-only skill with no install spec or code files, so nothing is written to disk by an installer. That lowers the risk of arbitrary code installation. However, the runtime instructions still direct network calls and token/session storage (see Instruction Scope).
! Credentials
The skill requires a single environment credential, NEMO_TOKEN, which would be proportionate if the skill were truly a nemovideo client, but is disproportionate relative to the advertised 'React component generator' purpose. The frontmatter also references a config path (~/.config/nemovideo/) not listed in the registry metadata, an inconsistency worth questioning. The instructions additionally describe auto-creating a token via an anonymous-token endpoint if NEMO_TOKEN is absent, which means the agent can obtain and persist credentials without any explicit user-provided secret.
Persistence & Privilege
The skill does not request always: true and does not modify other skills, but it instructs the agent to store session_id and (if needed) the anonymous token for subsequent calls and references a user config path. Persisting service tokens/sessions in user config is plausible for a client, but combined with silent auto-provisioning of tokens it increases the chance of unexpected background network activity. This is noteworthy but not, by itself, a showstopper.
What to consider before installing
This skill's name promises 'React component' generation, but its runtime instructions are for a cloud media/rendering service (nemovideo) and require a NEMO_TOKEN. Before installing or enabling it, consider: (1) Ask the author to explain the mismatch — is this actually a video rendering frontend that produces visual previews of components? (2) Decide whether you are comfortable with the agent auto-creating/storing an anonymous NEMO_TOKEN and session_id in your account/config; request where and how tokens are stored and how to revoke them. (3) Verify the external endpoints (https://mega-api-prod.nemovideo.ai) and privacy policy of that service. (4) If you only want code-generation (no cloud renders), do not install/enable this skill. (5) For safety, avoid granting it access to other unrelated credentials and consider running it in a restricted environment until you confirm its intended behavior.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

⚛️ Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk974s3pdkm80pc0p82d1gzva7s854j5q
28 downloads
0 stars
1 version
Updated 19h ago
v1.0.0
MIT-0

Getting Started

Share your code snippets and I'll get started on AI component generation. Or just tell me what you're thinking.

Try saying:

  • "generate my code snippets"
  • "export 1080p MP4"
  • "generate a reusable React button component"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.
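The two-step flow above can be sketched as plain request construction (a sketch only: the endpoint paths and body fields are taken from this document and are otherwise unverified, `build_token_request` and `build_session_request` are hypothetical helper names, and nothing here performs an actual network call):

```python
import json
import uuid

BASE = "https://mega-api-prod.nemovideo.ai"

def build_token_request():
    # Step 1: anonymous-token request; the only required header is a
    # freshly generated X-Client-Id UUID, with no request body.
    return {
        "method": "POST",
        "url": f"{BASE}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": str(uuid.uuid4())},
    }

def build_session_request(token, language="en"):
    # Step 2: create a session; the response's session_id would be stored
    # and reused on all subsequent calls.
    return {
        "method": "POST",
        "url": f"{BASE}/api/tasks/me/with-session/nemo_agent",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"task_name": "project", "language": language}),
    }
```

Keeping request construction separate from sending makes it easy to inspect exactly what would leave the machine before anything is transmitted.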

Keep setup communication brief. Don't display raw API responses or token values to the user.

Best React Component Generator — Generate React Components from Prompts

Drop your code snippets in the chat and tell me what you need. I'll handle the AI component generation on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send a text description of a login form with email and password fields, ask for a reusable React button component with TypeScript props and Tailwind styling, and about 20-40 seconds later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — more specific prompts produce cleaner, more usable component output.

Matching Input to Actions

User prompts referencing best react component generator, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | |
| "status" / "状态" / "show tracks" | §3.4 State | |
| "upload" / "上传" / user sends file | §3.2 Upload | |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | |
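The routing table above can be mirrored by a naive keyword classifier (a sketch; the skill's actual "keyword and intent classification" is not specified in this document, so `route_action` is a hypothetical helper):

```python
def route_action(message):
    # First matching keyword group wins; anything unmatched falls
    # through to the SSE chat endpoint, as the table describes.
    text = message.lower()
    rules = [
        (("export", "导出", "download", "send me the video"), "3.5 Export"),
        (("credits", "积分", "balance", "余额"), "3.3 Credits"),
        (("status", "状态", "show tracks"), "3.4 State"),
        (("upload", "上传"), "3.2 Upload"),
    ]
    for keywords, action in rules:
        if any(k in text for k in keywords):
            return action
    return "3.1 SSE"  # generate, edit, add BGM, ...
```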

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  1. Session: POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":"<lang>"}. Gives you a session_id.
  2. Chat (SSE): POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  3. Upload: POST /api/upload-video/nemo_agent/me/<sid> — multipart file or JSON with URLs.
  4. Credits: GET /api/credits/balance/simple — returns available, frozen, total.
  5. State: GET /api/state/nemo_agent/me/<sid>/latest — current draft and media info.
  6. Export: POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/<id> every 30s for completed status and download URL.
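The export polling step can be sketched as a loop with an injected fetcher, so the control flow is testable without a live backend (the `{"status": ..., "download_url": ...}` response shape is an assumption; only the path and 30-second interval come from this document):

```python
import time

def poll_export(render_id, fetch_status, interval=30, max_attempts=10):
    # Poll GET /api/render/proxy/lambda/<id> until the job completes.
    # fetch_status(path) -> dict is injected by the caller.
    for attempt in range(max_attempts):
        result = fetch_status(f"/api/render/proxy/lambda/{render_id}")
        if result.get("status") == "completed":
            return result.get("download_url")
        if attempt < max_attempts - 1:
            time.sleep(interval)
    return None  # never completed; caller should surface a timeout
```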

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Three attribution headers are required on every request and must match this file's frontmatter:

| Header | Value |
| --- | --- |
| X-Skill-Source | best-react-component-generator |
| X-Skill-Version | frontmatter version |
| X-Skill-Platform | auto-detect: clawhub / cursor / unknown from install path |

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.
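Taken together, a per-request header builder might look like this (a sketch; the header names and the 402-on-missing-header behavior are as stated above, while `build_headers` and its defaults are hypothetical):

```python
def build_headers(token, version="1.0.0", platform="unknown"):
    # Authorization plus the three attribution headers; per this doc,
    # a missing attribution header makes exports fail with 402.
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "best-react-component-generator",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,  # clawhub / cursor / unknown
    }
```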

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.
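The abbreviated draft keys can be expanded into readable names with a small recursive rename (a sketch; the key table comes from the line above, and the long-form names are illustrative choices, not the backend's own vocabulary):

```python
DRAFT_KEYS = {"t": "tracks", "tt": "track_type", "sg": "segments",
              "d": "duration_ms", "m": "metadata"}
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}

def expand_draft(draft):
    # Recursively rename abbreviated keys; unknown keys pass through.
    if isinstance(draft, dict):
        return {DRAFT_KEYS.get(k, k): expand_draft(v) for k, v in draft.items()}
    if isinstance(draft, list):
        return [expand_draft(v) for v in draft]
    return draft
```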

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
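One way to triage the stream line by line (a sketch: the heartbeat and empty-`data:` behavior is as described above, but the event payload schema is not documented here, so the `type` and `content` fields are assumptions):

```python
import json

def triage_sse_line(line, state):
    # Returns user-visible text, or None for anything that stays internal.
    line = line.strip()
    if not line or line == "data:":
        state["heartbeats"] += 1  # heartbeat: backend still working
        return None
    if line.startswith("data:"):
        payload = json.loads(line[len("data:"):])
        if payload.get("type") == "text":
            return payload["content"]  # forward after GUI translation
    return None  # tool calls and unknown events stay internal
```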

Error Handling

| Code | Meaning | Action |
| --- | --- | --- |
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session §3.0 |
| 2001 | No credits | Anonymous: show registration URL with ?bind=<id> (get <id> from create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compress/trim |
| 400 | Missing X-Client-Id | Generate Client-Id and retry (see §1) |
| 402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry in 30s once |
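The error table can be collapsed into a simple lookup for the agent's recovery logic (a sketch that mirrors the table; the fallback string for unknown codes is an assumption):

```python
def next_step(code):
    # Map an API error code to the documented recovery action.
    actions = {
        0: "continue",
        1001: "re-auth via anonymous-token",
        1002: "create new session",
        2001: "show registration or top-up message",
        4001: "show supported formats",
        4002: "suggest compress or trim",
        400: "generate X-Client-Id and retry",
        402: "prompt to register or upgrade plan",
        429: "retry once in 30s",
    }
    return actions.get(code, "surface raw error to user")
```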

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate a reusable React button component with TypeScript props and Tailwind styling" — concrete instructions get better results.

Max file size is 200MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.

Export as MP4 for widest compatibility.

Common Workflows

Quick edit: Upload → "generate a reusable React button component with TypeScript props and Tailwind styling" → Download MP4. Takes 20-40 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
