Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

ListenHub

v0.6.0

Turn ideas into podcasts, explainer videos, voice narration, and AI images via ListenHub. Use when the user wants to "make a podcast", "create an explainer v...

0 stars · 3.7k downloads · 3 current · 3 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for kkaticld/listenhub-ai.

Prompt Preview: Install & Setup
Install the skill "ListenHub" (kkaticld/listenhub-ai) from ClawHub.
Skill page: https://clawhub.ai/kkaticld/listenhub-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install listenhub-ai

ClawHub CLI


npx clawhub@latest install listenhub-ai
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
high confidence
Purpose & Capability
The SKILL.md and user-facing text repeatedly point to listenhub.ai and instruct the user to obtain LISTENHUB_API_KEY from https://listenhub.ai. However, the code calls API endpoints on other domains (API_BASE=https://api.marswave.ai/openapi/v1 in scripts/lib.sh and https://api.labnana.com in generate-image.sh) and references a third-party client ID and GitHub repo (marswaveai). Registry metadata also lists no required env vars, while SKILL.md and the scripts require LISTENHUB_API_KEY and optionally LISTENHUB_OUTPUT_DIR. The domains and auth headers used in requests do not match the ListenHub domain named in the description; this incoherence could result in keys being sent to unexpected services.
Instruction Scope
The runtime instructions enforce using the provided scripts only, and those scripts do more than call a single API: they read, parse, and write shell rc files (~/.zshrc, ~/.bashrc, ~/.profile) to load or persist API keys and output path, perform a remote version check (curl to raw.githubusercontent.com), validate and accept arbitrary input files/URLs, and can poll long-running jobs. The scripts therefore access and modify user configuration outside the skill directory and perform network calls beyond the named ListenHub service.
Install Mechanism
There is no formal install spec, but generate-image.sh contains logic to auto-install missing dependencies (jq, curl) using system package managers (brew, apt-get, yum, dnf, pacman, choco, scoop) and will eval install commands. That behavior means running the scripts may execute privileged package-manager operations on your machine; the auto-install paths and remote version check also cause outgoing network requests to third-party hosts (GitHub, marswave/labnana APIs).
Credentials
SKILL.md and scripts require an API key (LISTENHUB_API_KEY) and optionally LISTENHUB_OUTPUT_DIR, but the registry metadata omitted required env vars — an inconsistency. More importantly, the key requested for 'ListenHub' will be used in requests to api.marswave.ai and api.labnana.com according to the scripts, meaning your key could be transmitted to domains that don't match the user-facing service name. Scripts also search and modify multiple shell rc files to read/write those values (persistence of secrets to disk).
Persistence & Privilege
The scripts will write/replace export lines in user shell rc files (e.g., append or sed-replace export LISTENHUB_API_KEY=...) and export values into the runtime environment. They also perform an automatic remote version check (network access). The skill is not 'always:true' so it won't be force-enabled globally, but it does make persistent changes to user configuration without a contained or sandboxed install step — this is a notable privilege and persistence behavior.
What to consider before installing
Do not install or run this skill without clarification. Things to check before proceeding:

  1. Ask the publisher to explain why the code posts to api.marswave.ai and api.labnana.com while the SKILL.md says listenhub.ai; confirm which domain will receive your API key.
  2. Back up and review your shell rc files — the scripts will write your API key into ~/.bashrc or ~/.zshrc; prefer setting LISTENHUB_API_KEY in a secure way rather than letting the script write it.
  3. Avoid letting the scripts auto-install packages as root; run in a sandbox/container or on a throwaway VM first.
  4. If you must test, run the scripts with network monitoring (or in an offline/simulated environment) to confirm endpoints and payloads, and validate that the API key is sent only to the intended ListenHub domain.
  5. Prefer a vendor-signed skill or one whose install and endpoints match its documentation.

Given the clear mismatches and persistence behavior, treat this skill as untrusted until the author provides a coherent explanation and updates the package metadata and code to align with the documented ListenHub endpoints.
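For the rc-file check, here is a minimal sketch that backs up each rc file and lists any lines mentioning LISTENHUB before the scripts can rewrite them. The backup_and_scan helper name is ours, not part of the skill:

```shell
#!/bin/sh
# Sketch: back up an rc file and show lines the skill's scripts might rewrite.
# backup_and_scan is a hypothetical helper, not shipped with the skill.
backup_and_scan() {
  rc=$1
  [ -f "$rc" ] || return 0           # nothing to do if the file is absent
  cp "$rc" "$rc.bak"                 # keep a restorable copy
  grep -n 'LISTENHUB' "$rc" || true  # show any existing key/export lines
}

for rc in "$HOME/.zshrc" "$HOME/.bashrc" "$HOME/.profile"; do
  backup_and_scan "$rc"
done
```

Restoring is then a matter of copying the .bak file back over the modified rc file.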

Like a lobster shell, security has layers — review code before you run it.

latest: vk97b2z9ra8ayr9pme9s2chdpj58238q6
3.7k downloads · 0 stars · 1 version
Updated 9h ago
v0.6.0 · MIT-0

ListenHub

Generate podcasts, explainer videos, TTS audio, and AI images through shell scripts that wrap the ListenHub API.

Setup

Set LISTENHUB_API_KEY before first use. Two options:

Option A — OpenClaw env config (recommended): Add to ~/.openclaw/openclaw.json under env:

{ "env": { "LISTENHUB_API_KEY": "lh_sk_..." } }

Option B — Shell export:

export LISTENHUB_API_KEY="lh_sk_..."

Get your key: https://listenhub.ai/settings/api-keys

For image generation, also set LISTENHUB_OUTPUT_DIR (defaults to ~/Downloads).
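A quick sanity check, assuming only the variable names documented above, that confirms the key is present without printing the secret itself (check_listenhub_env is our own helper name):

```shell
# Sketch: report whether the documented env vars are set, without echoing the key.
check_listenhub_env() {
  if [ -n "${LISTENHUB_API_KEY:-}" ]; then
    echo "LISTENHUB_API_KEY is set"
  else
    echo "LISTENHUB_API_KEY is NOT set"
  fi
  # falls back to the documented default when LISTENHUB_OUTPUT_DIR is unset
  echo "Output dir: ${LISTENHUB_OUTPUT_DIR:-$HOME/Downloads}"
}
check_listenhub_env
```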

Script Location

All scripts live at scripts/ relative to this SKILL.md. Resolve the path:

SCRIPTS="$(cd "$(dirname "<path-to-this-SKILL.md>")" && pwd)/scripts"

Dependencies: curl, jq (install if missing).
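Rather than letting any script auto-install tools, a small sketch that just reports what is missing so you can install it yourself (missing_deps is a hypothetical helper, not part of the skill):

```shell
# Sketch: list required tools that are not on PATH instead of auto-installing.
missing_deps() {
  missing=""
  for dep in "$@"; do
    command -v "$dep" >/dev/null 2>&1 || missing="$missing $dep"
  done
  # prints nothing when everything is present
  printf '%s' "$missing"
}

if [ -n "$(missing_deps curl jq)" ]; then
  echo "Install before use:$(missing_deps curl jq)" >&2
fi
```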

Modes

Mode       Script                                   Use Case
Podcast    create-podcast.sh                        1-2 speaker discussion
Explainer  create-explainer.sh + generate-video.sh  Narration + AI visuals
TTS        create-tts.sh                            Pure voice reading
Speech     create-speech.sh                         Multi-speaker scripted audio
Image      generate-image.sh                        AI image generation

Helper scripts: get-speakers.sh (list voices), check-status.sh (poll progress).

Hard Constraints

  • Execute ONLY through provided scripts. Direct API calls are forbidden.
  • Never hardcode speakerIds — call get-speakers.sh to discover them.
  • The API is proprietary; endpoints and parameters are internal to scripts.

Mode Detection

Auto-detect from user input:

  • Podcast: "podcast", "chat about", "discuss", "debate" → create-podcast.sh
  • Explainer: "explain", "introduce", "video", "tutorial" → create-explainer.sh
  • TTS: "read aloud", "convert to speech", "tts" → create-tts.sh
  • Image: "generate image", "draw", "create picture" → generate-image.sh

If ambiguous, ask user.
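The keyword routing above can be sketched as a plain case statement. The detect_mode helper and its crude substring matching are our illustration, not the skill's actual logic:

```shell
# Sketch: map a user request to a mode via the documented keywords.
# Matching is naive substring search on the lowercased input; anything
# that matches no pattern falls through to "ask".
detect_mode() {
  case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
    *podcast*|*"chat about"*|*discuss*|*debate*)   echo podcast ;;
    *explain*|*introduce*|*video*|*tutorial*)      echo explainer ;;
    *"read aloud"*|*"convert to speech"*|*tts*)    echo tts ;;
    *"generate image"*|*draw*|*"create picture"*)  echo image ;;
    *) echo ask ;;
  esac
}
```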

Quick Reference

Get Speakers

$SCRIPTS/get-speakers.sh --language zh   # or en

Returns JSON with data.items[].speakerId. If user doesn't specify a voice, pick the first match for the language.
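A sketch of extracting the first speakerId with jq; the sample JSON is illustrative, and only the data.items[].speakerId path comes from the documentation above:

```shell
# Sketch: pick the first speakerId from a get-speakers.sh-style response.
# Sample payload is made up; only the field path matches the docs.
sample='{"data":{"items":[{"speakerId":"spk_001"},{"speakerId":"spk_002"}]}}'
first_id=$(printf '%s' "$sample" | jq -r '.data.items[0].speakerId')
echo "$first_id"
```

In practice you would pipe the real script instead: $SCRIPTS/get-speakers.sh --language en | jq -r '.data.items[0].speakerId'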

Podcast (One-Stage, default)

$SCRIPTS/create-podcast.sh --query "topic" --language zh|en --mode quick|deep|debate --speakers <id1[,id2]> [--source-url URL] [--source-text TEXT]
  • quick is default mode. debate requires 2 speakers.
  • Multiple --source-url / --source-text allowed.

Podcast (Two-Stage: text → review → audio)

Use only when user wants to review/edit the script before audio generation.

Stage 1: $SCRIPTS/create-podcast-text.sh (same args as one-stage)
Review: Poll with check-status.sh --wait, save draft, STOP and wait for user approval.
Stage 2: $SCRIPTS/create-podcast-audio.sh --episode <id> [--scripts modified.json]

Explainer Video

$SCRIPTS/create-explainer.sh --content "text" --language zh|en --mode info|story --speakers <id>
$SCRIPTS/generate-video.sh --episode <id>

TTS (FlowSpeech)

$SCRIPTS/create-tts.sh --type text|url --content "text or URL" --language zh|en --mode smart|direct --speakers <id>
  • Default mode: direct (no content modification). smart fixes grammar/punctuation.
  • Text limit: 10,000 characters; use URL for longer content.

Multi-Speaker Speech

$SCRIPTS/create-speech.sh --scripts scripts.json

JSON format: {"scripts": [{"content": "...", "speakerId": "..."}]}
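One way to assemble that file, assuming SPK_HOST and SPK_GUEST hold ids previously discovered via get-speakers.sh; the variable names and sample lines are ours:

```shell
# Sketch: build scripts.json via an expanding heredoc. Speaker ids must come
# from get-speakers.sh (never hardcoded), here held in shell variables.
cat > scripts.json <<EOF
{"scripts": [
  {"content": "Welcome to the show.", "speakerId": "$SPK_HOST"},
  {"content": "Thanks for having me.", "speakerId": "$SPK_GUEST"}
]}
EOF
```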

Image Generation

$SCRIPTS/generate-image.sh --prompt "description" [--size 1K|2K|4K] [--ratio 16:9|1:1|9:16|...] [--reference-images "url1,url2"]
  • Default: 2K, 16:9. Max 14 reference images.
  • Output saved to $LISTENHUB_OUTPUT_DIR (default ~/Downloads).

Check Status

$SCRIPTS/check-status.sh --episode <id> --type podcast|flow-speech|explainer [--wait] [--timeout 300]

Exit codes: 0=done, 1=failed, 2=timeout (retry safe).

Use --wait for automated polling. Run generation in background for long tasks.
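The exit-code contract above can drive a retry wrapper; poll_with_retry is a hypothetical helper that retries only on the retry-safe timeout code:

```shell
# Sketch: retry wrapper around the documented exit codes
# (0 = done, 1 = failed, 2 = timeout and safe to retry).
poll_with_retry() {
  max=$1; shift
  tries=0
  while [ "$tries" -lt "$max" ]; do
    "$@"
    rc=$?
    case $rc in
      0) return 0 ;;             # done
      1) return 1 ;;             # failed; do not retry
      2) tries=$((tries + 1)) ;; # timeout; retry is safe
      *) return "$rc" ;;         # unexpected code; surface it
    esac
  done
  return 2
}

# e.g. poll_with_retry 3 "$SCRIPTS/check-status.sh" --episode "$EP" --type podcast --wait
```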

Interaction Pattern

  1. Detect mode from user input
  2. If no speaker specified, call get-speakers.sh, pick first match
  3. Run the appropriate script (background for long tasks)
  4. Report submission, give estimated time (podcast 2-3min, explainer 3-5min, TTS 1-2min)
  5. On "done yet?" → run check-status.sh --wait
  6. Show result link. Offer download only when asked.

Language

Match response language to user input language. Chinese input → Chinese responses. English → English.
