Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Infinite Talk Ai

v1.0.0

Get talking avatar videos ready to post, without touching a single slider. Upload your images or video (MP4, MOV, JPG, PNG, up to 200MB), say something like...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mhogan2013-9/infinite-talk-ai.

Prompt preview (Install & Setup):
Install the skill "Infinite Talk Ai" (mhogan2013-9/infinite-talk-ai) from ClawHub.
Skill page: https://clawhub.ai/mhogan2013-9/infinite-talk-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install infinite-talk-ai

ClawHub CLI


npx clawhub@latest install infinite-talk-ai
Security Scan

  • VirusTotal: Benign
  • OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (talking avatar video generator) aligns with the required credential NEMO_TOKEN and the listed API endpoints. However, the SKILL.md frontmatter includes a configPaths entry (~/.config/nemovideo/) that the registry metadata did not list; this inconsistency is worth clarifying (why would the skill need that path?).
Instruction Scope
Instructions stay within the scope of uploading media, starting sessions, streaming SSE, and exporting renders. They do instruct the agent to: read the skill's YAML frontmatter at runtime, detect install-path to set X-Skill-Platform, generate/store anonymous tokens if NEMO_TOKEN absent, and persist session_id for subsequent requests. Those actions are plausible for a cloud render skill but involve reading/writing local state and handling tokens (sensitive); the SKILL.md does not specify safe storage locations or retention policies.
Install Mechanism
Instruction-only skill with no install spec and no code files — lowest install risk. No downloads or external package installs are requested.
Credentials
Only NEMO_TOKEN is declared as required, which is proportional for this API. Concern: SKILL.md instructs creating and storing anonymous tokens and session IDs if NEMO_TOKEN is not present, and frontmatter references a user config path (~/.config/nemovideo/) that could be used to read/write credentials. The registry reported no required config paths; this mismatch should be explained. Confirm what is stored where and whether tokens persist beyond their stated 7-day lifetime.
Persistence & Privilege
always:false and normal autonomous invocation are used. The skill asks to persist a session_id and possibly token data, but it does not request system-wide privileges or modify other skills. No 'always:true' or other high privilege flags present.
What to consider before installing
This skill appears to do what it says (upload media, call nemovideo.ai endpoints, return a rendered MP4), but there are a few unclear details you should check before installing:

  1. Confirm the config-path behavior: the SKILL.md references ~/.config/nemovideo/ but the registry didn't list any config paths. Ask the author whether the skill will read or write files there and what it stores.
  2. Ask where session tokens and anonymous NEMO tokens are stored (file path, encryption, retention) and whether they can be revoked.
  3. If you provide a long-lived or sensitive token as NEMO_TOKEN, prefer creating a least-privilege/isolated account for this service.
  4. Verify the API domain (mega-api-prod.nemovideo.ai) is correct and trustworthy for your org.

If these questions are not answered satisfactorily, treat the skill as risky because it handles and persists authorization tokens and local config.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🗣️ Clawdis

  • Env (primary): NEMO_TOKEN
  • Latest version hash: vk973ncmdxfyeb6ygahcw2exft1852rck
  • 59 downloads · 0 stars · 1 version
  • Updated 1 week ago
  • v1.0.0, MIT-0 license

Getting Started

Share your images or video and I'll get started on AI talking avatar generation. Or just tell me what you're thinking.

Try saying:

  • "generate my images or video"
  • "export 1080p MP4"
  • "make this photo talk using my voiceover"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.

Keep setup communication brief. Don't display raw API responses or token values to the user.
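The two setup calls could be prepared like this with the standard library; requests are constructed but not sent here, and the endpoint paths, headers, and body shape are taken from the steps above:

```python
import json
import uuid
import urllib.request

BASE = "https://mega-api-prod.nemovideo.ai"

def build_token_request() -> urllib.request.Request:
    """Step 1: anonymous token. A random UUID identifies this client."""
    client_id = str(uuid.uuid4())
    return urllib.request.Request(
        f"{BASE}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": client_id},
    )

def build_session_request(token: str, language: str = "en") -> urllib.request.Request:
    """Step 2: create a session; the response's session_id must be stored."""
    body = json.dumps({"task_name": "project", "language": language}).encode()
    return urllib.request.Request(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        method="POST",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```

Sending each prepared request through `urllib.request.urlopen` and parsing the JSON response would complete the flow.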

Infinite Talk AI — Generate looping talking avatar videos

Drop your images or video in the chat and tell me what you need. I'll handle the AI talking avatar generation on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send a single portrait photo and a 60-second audio script, ask for "make this photo talk using my voiceover and loop the animation infinitely", and about 1-2 minutes later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — use a front-facing portrait with a clear face for the most realistic lip-sync results.

Matching Input to Actions

User prompts referencing infinite talk ai, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action |
| --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits |
| "status" / "状态" / "show tracks" | §3.4 State |
| "upload" / "上传" / user sends a file | §3.2 Upload |
| Everything else (generate, edit, add BGM…) | §3.1 SSE |
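The routing table above can be sketched as simple substring matching; this is a minimal illustration, and the actual skill presumably uses richer intent classification than literal keyword checks:

```python
def route_intent(message: str) -> str:
    """Map a user message to an action section via keyword matching (sketch)."""
    msg = message.lower()
    if any(k in msg for k in ("export", "导出", "download", "send me the video")):
        return "export"   # §3.5 Export
    if any(k in msg for k in ("credits", "积分", "balance", "余额")):
        return "credits"  # §3.3 Credits
    if any(k in msg for k in ("status", "状态", "show tracks")):
        return "state"    # §3.4 State
    if "upload" in msg or "上传" in msg:
        return "upload"   # §3.2 Upload
    return "sse"          # §3.1 SSE: everything else (generate, edit, add BGM...)
```

Checking the more specific buckets first keeps generic verbs like "generate" falling through to the SSE default.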

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Base URL: https://mega-api-prod.nemovideo.ai

| Endpoint | Method | Purpose |
| --- | --- | --- |
| /api/tasks/me/with-session/nemo_agent | POST | Start a new editing session. Body: `{"task_name":"project","language":"<lang>"}`. Returns session_id. |
| /run_sse | POST | Send a user message. Body includes app_name, session_id, new_message. Stream response with `Accept: text/event-stream`. Timeout: 15 min. |
| /api/upload-video/nemo_agent/me/<sid> | POST | Upload a file (multipart) or URL. |
| /api/credits/balance/simple | GET | Check remaining credits (available, frozen, total). |
| /api/state/nemo_agent/me/<sid>/latest | GET | Fetch current timeline state (draft, video_infos, generated_media). |
| /api/render/proxy/lambda | POST | Start export. Body: `{"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}`. Poll status every 30s. |
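The export endpoint's 30-second polling guidance can be sketched as a small helper. The status values ("queued", "running", "done") are assumed for illustration; the doc does not specify the status response shape:

```python
import time

def poll_export(check_status, interval: int = 30, max_polls: int = 30):
    """Poll render status until done; check_status is a callable returning
    the current job state (assumed values: 'queued', 'running', 'done')."""
    for _ in range(max_polls):
        if check_status() == "done":
            return True
        time.sleep(interval)  # the endpoint table suggests polling every 30s
    return False  # gave up; the caller should surface a timeout to the user
```

Passing the status check as a callable keeps the polling loop independent of how the status request itself is made.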

Accepted file types: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
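A client-side pre-check against these limits might look like the following; the 200MB cap comes from the Tips section, and the comments reference the documented error codes:

```python
# File types accepted by the upload endpoint, per the list above.
ACCEPTED = {"mp4", "mov", "avi", "webm", "mkv", "jpg", "png", "gif",
            "webp", "mp3", "wav", "m4a", "aac"}
MAX_BYTES = 200 * 1024 * 1024  # 200MB limit stated in Tips and Tricks

def validate_upload(path: str, size_bytes: int):
    """Reject obviously bad uploads before hitting the API (sketch)."""
    ext = path.rsplit(".", 1)[-1].lower()
    if ext not in ACCEPTED:
        return (False, "unsupported file type")  # server would return 4001
    if size_bytes > MAX_BYTES:
        return (False, "file too large")         # server would return 4002
    return (True, "ok")
```

Validating locally avoids burning a round trip (and the user's patience) on uploads the server will reject anyway.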

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: infinite-talk-ai
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from the install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, otherwise unknown)

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
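Putting the attribution rules together, a header builder might look like this; the default install path and version argument are illustrative, not read from real frontmatter:

```python
def attribution_headers(token: str, version: str = "1.0.0",
                        install_path: str = "~/.clawhub/skills/infinite-talk-ai"):
    """Assemble the four required headers from the attribution rules above."""
    # Platform is inferred from where the skill was installed.
    if ".clawhub" in install_path:
        platform = "clawhub"
    elif ".cursor" in install_path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "infinite-talk-ai",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```

Since missing attribution headers fail exports with 402, centralizing them in one builder avoids forgetting one on a single request.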

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once

SSE Event Handling

| Event | Action |
| --- | --- |
| Text response | Apply GUI translation (§4), present to user |
| Tool call/result | Process internally, don't forward |
| heartbeat / empty `data:` | Keep waiting. Every 2 min: "⏳ Still working..." |
| Stream closes | Process final response |

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
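A minimal consumer following the event table above; the JSON event shape (`{"type": ..., "content": ...}`) is an assumption, since the doc does not specify the wire format of individual events:

```python
import json

def handle_sse_lines(lines):
    """Collect user-facing text from raw SSE lines (sketch)."""
    texts = []
    for line in lines:
        if not line.startswith("data:"):
            continue                        # ignore comments and other fields
        payload = line[len("data:"):].strip()
        if not payload:
            continue                        # heartbeat / empty data: keep waiting
        event = json.loads(payload)
        if event.get("type") == "text":
            texts.append(event["content"])  # present to the user
        # tool calls/results are processed internally, not forwarded
    return texts
```

A real consumer would also track elapsed time for the 2-minute "Still working..." notice and poll session state when the stream closes without text.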

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

| Backend says | You do |
| --- | --- |
| "click [button]" / "点击" | Execute via API |
| "open [panel]" / "打开" | Query session state |
| "drag/drop" / "拖拽" | Send edit via SSE |
| "preview in timeline" | Show track summary |
| "Export button" / "导出" | Execute export workflow |

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks): 1. Video: city timelapse (0-10s) 2. BGM: Lo-fi (0-10s, 35%) 3. Title: "Urban Dreams" (0-3s)
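Under the short-key schema above, a timeline summary like the example could be generated roughly as follows; the "name" field inside the m metadata is an assumption for illustration:

```python
# Short-key draft JSON: t=tracks, tt=track type, sg=segments, d=duration (ms), m=metadata.
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize_draft(draft: dict) -> list:
    """Expand a short-key draft into human-readable track lines (sketch)."""
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        total_ms = sum(seg.get("d", 0) for seg in track.get("sg", []))
        name = track.get("m", {}).get("name", "")  # assumed metadata field
        lines.append(f"{i}. {kind}: {name} ({total_ms / 1000:g}s)")
    return lines
```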

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "make this photo talk using my voiceover and loop the animation infinitely" — concrete instructions get better results.

Max file size is 200MB. Stick to MP4, MOV, JPG, PNG for the smoothest experience.

Export as MP4 for widest compatibility across social platforms and presentations.

Common Workflows

Quick edit: Upload → "make this photo talk using my voiceover and loop the animation infinitely" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
