Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

podcast-highlights-deck

v1.0.0

Create a highly visual, editorial long-scroll HTML microsite from a podcast episode. Use when the user gives a podcast link (Apple Podcasts/Spotify/RSS/direc...

by @ken-chy129 · Anygen Selected Skill

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for ken-chy129/podcast-highlights-deck.

Prompt Preview: Install & Setup
Install the skill "podcast-highlights-deck" (ken-chy129/podcast-highlights-deck) from ClawHub.
Skill page: https://clawhub.ai/ken-chy129/podcast-highlights-deck
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install podcast-highlights-deck

ClawHub CLI


npx clawhub@latest install podcast-highlights-deck
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
⚠ Purpose & Capability
The skill's purpose (create a static highlight site with per-clip audio) matches the included template and clipping script. However, SKILL.md expects tools such as ffmpeg, yt-dlp, Python, and speech-to-text tooling (anygen-speech-to-text or OpenAI Whisper), while the registry metadata lists no required binaries or environment variables. The manifest therefore does not declare the skill's real runtime needs, an inconsistency.
⚠ Instruction Scope
Runtime instructions direct the agent to search the web, fetch RSS/episode audio, download audio (yt-dlp fallback), transcribe audio via third‑party speech-to-text tools, split audio, clip segments, translate text, and build a site. Those steps involve network downloads, writing audio and JSON to disk, and calling external services/APIs — all expected for the task but not explicitly scoped in the skill metadata (e.g., where/with what credentials to call 'anygen-speech-to-text' or 'whisper-1' is unspecified).
Install Mechanism
No install spec (instruction-only) — lower risk for hidden installers. The skill does include code files (Python script and a TypeScript template) which will be copied into a project. The Python script invokes ffmpeg via subprocess; the skill relies on external binaries but doesn't provide or declare them.
⚠ Credentials
The SKILL.md references external speech-to-text services (anygen-speech-to-text and OpenAI Whisper), which normally require API keys, yet the requires.env and primary credential fields are empty. The skill therefore implicitly expects credentials or platform-provided tools without declaring them; this mismatch is a red flag because it obscures which secrets the agent will need to access.
Persistence & Privilege
The skill is user-invocable and not always-enabled. It does not request permanent presence or modify other skills/config. It writes files into a working/project directory (normal for a site generator) but does not request elevated agent/system privileges in the manifest.
What to consider before installing
  • Expect to provide or ensure availability of binaries: ffmpeg (used by the script), Python (to run scripts), and optionally yt-dlp (for YouTube audio). The manifest does not declare these; verify you have them and are comfortable with the agent calling them.
  • Transcription: the workflow names 'anygen-speech-to-text' and OpenAI 'whisper-1'. Both typically need API keys and send audio to external services. Decide whether you want to upload podcast audio (which may contain private or copyrighted content) to those services, and provide keys securely if needed.
  • The skill will download audio and web pages (RSS, podcast pages, YouTube). Confirm you are allowed to download and reuse the source audio (copyright/legal considerations).
  • The template loads Google Fonts and uses import.meta.glob for local audio imports; the generated site will reference external font hosts and include the created mp3 files in its assets. If you need fully offline builds or want to avoid third-party hosts, edit the template.
  • The clip script (scripts/clip_audio.py) runs ffmpeg via subprocess to write mp3 clips and updates highlights.json. Review it (it is short and straightforward) and run it in a controlled workspace.
  • Because the skill doesn't declare required env vars or credentials, ask the publisher (or inspect SKILL.md/README) how transcription and translation are expected to be authenticated in your environment. If a platform injects model access automatically, verify its privacy and billing behavior.

Summary recommendation: the skill appears to do what it claims, but the missing declarations of required system tools and API credentials are an important inconsistency. If you plan to use it, verify that ffmpeg, Python, and (optionally) yt-dlp are present, clarify how you will provide transcription/translation API keys, and accept the related privacy and licensing implications.

Like a lobster shell, security has layers — review code before you run it.

latest: vk973tk5w7tyncn9svxyvg514r583h8aq
87 downloads · 0 stars · 1 version · Updated 1mo ago
v1.0.0 · MIT-0

Podcast Highlights Deck

(Internal skill id: podcast-highlights-deck)

What it creates

Create a premium editorial long-scroll highlight deck with a sticky TOC rail, multilingual toggle, and original audio clips.


Workflow

Inputs

  • podcast_url: episode page URL (Apple/Spotify/RSS/YouTube/direct MP3)
  • languages: list of language codes, e.g. en, ja, zh

Output

  • A static website (Vite build) with:
    • editorial hero (no full-bleed podcast artwork)
    • sticky left rail: metadata + language toggle + table of contents
    • 8–12 highlight sections
    • per-highlight audio clip playback (original audio)
    • global language switching (no mixed-language UI)

Workflow (execute in order)

1) Acquire audio (source-of-truth)

Prefer a direct audio URL (RSS <enclosure>). Recommended approach:

  1. Use search_web to find the show’s RSS feed (queries like: "<show name>" RSS feed, or the Apple show id + RSS).
  2. Use get_web_page_contents to fetch the RSS XML.
  3. Parse RSS to locate the exact episode and extract:
    • title
    • publish date
    • duration (if present)
    • cover image
    • enclosure mp3 URL
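The RSS parsing in step 1.3 can be sketched with the standard library. This is a minimal illustration, not code bundled with the skill; the feed structure and the episode-matching rule (substring match on the title) are assumptions.

```python
# Sketch of step 1.3: locate an episode <item> in an RSS feed and pull out
# the fields the workflow needs (title, pubDate, duration, cover, enclosure).
import xml.etree.ElementTree as ET
from typing import Optional

# iTunes podcast namespace, used for <itunes:duration> and <itunes:image>
ITUNES = "{http://www.itunes.com/dtds/podcast-1.0.dtd}"

def find_episode(rss_xml: str, episode_title: str) -> Optional[dict]:
    """Return metadata for the first <item> whose title contains episode_title."""
    root = ET.fromstring(rss_xml)
    for item in root.iter("item"):
        title = (item.findtext("title") or "").strip()
        if episode_title.lower() not in title.lower():
            continue
        enclosure = item.find("enclosure")
        image = item.find(f"{ITUNES}image")
        return {
            "title": title,
            "pub_date": item.findtext("pubDate"),
            "duration": item.findtext(f"{ITUNES}duration"),  # may be absent
            "cover": image.get("href") if image is not None else None,
            "enclosure_url": enclosure.get("url") if enclosure is not None else None,
        }
    return None
```

If the enclosure URL is missing, fall through to the yt-dlp or ask-the-user paths below.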

If RSS is unavailable:

  • If YouTube exists, use yt-dlp to download audio.
  • If a platform blocks direct audio access, ask the user for the RSS link or direct mp3.

Download audio to a working folder (example):

  • podcast_work/episode.mp3

2) Transcribe with timestamps

Primary:

  • Run anygen-speech-to-text episode.mp3 -o transcript -f json,md,srt.

Fallback (if the tool fails):

  • Split audio into chunks with ffmpeg (10 min chunks)
  • Use OpenAI Whisper (whisper-1) with response_format="verbose_json"
  • Merge segments by adding time offsets

You need a machine-readable file like:

  • transcript/episode_verbose.json containing segments with start, end, text
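The fallback's merge step ("merge segments by adding time offsets") can be sketched as follows, assuming fixed-length chunks (600 s matches the 10-minute ffmpeg split above) and Whisper verbose_json segments with start, end, and text fields:

```python
# Sketch of the step-2 fallback merge: shift each chunk's segment timestamps
# by the chunk's start offset so all segments share one episode timeline.
def merge_chunks(chunk_segments: list, chunk_seconds: float = 600.0) -> list:
    """chunk_segments[i] is the segment list transcribed from chunk i."""
    merged = []
    for i, segments in enumerate(chunk_segments):
        offset = i * chunk_seconds
        for seg in segments:
            merged.append({
                "start": seg["start"] + offset,
                "end": seg["end"] + offset,
                "text": seg["text"],
            })
    return merged
```

The merged list is what gets written to transcript/episode_verbose.json.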

3) Curate 8–12 highlights (do NOT dump transcript)

Selection philosophy:

  • Prefer fewer, stronger highlights.
  • Only use quotes that exist in the transcript.

For each highlight, produce:

  • id (h1..h12)
  • start + end timestamps in seconds (from transcript)
  • title (translate later)
  • quote (English, exact or lightly cleaned)
  • context (1 sentence)
  • takeaway (editorial interpretation)
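The rule "only use quotes that exist in the transcript" can be enforced mechanically. A minimal sketch (not part of the skill; whitespace/case normalization is an assumed cleaning policy):

```python
# Sketch of a curation guard: verify each candidate quote actually appears
# in the transcript before it is accepted as a highlight.
def quote_in_transcript(quote: str, segments: list) -> bool:
    """True if the (lightly cleaned) quote appears in the joined segment text."""
    def norm(s: str) -> str:
        # Collapse whitespace and lowercase, mirroring "lightly cleaned" quotes.
        return " ".join(s.lower().split())
    transcript = norm(" ".join(seg["text"] for seg in segments))
    return norm(quote) in transcript
```

Running this check per highlight before writing highlights.json prevents invented quotes from reaching the site.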

4) Translate + global UI copy

For every supported language:

  • Translate titles, context, takeaway, and quote (transcreation; keep meaning + tone).

Important behavior:

  • In non-English modes, show translated quote as primary.
  • Preserve a connection to English:
    • show “Original (English)” as a secondary expandable panel.

Also translate all UI strings:

  • hero framing
  • sidebar labels
  • buttons (“Play clip”, “Back to top”, etc.)
  • closing section labels

5) Clip original audio per highlight

Use the bundled script:

  • python scripts/clip_audio.py --audio episode.mp3 --highlights highlights.json --out-dir site_assets

Conventions:

  • add ~2s padding before/after for natural listening
  • output:
    • site_assets/audio/h1.mp3
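The bundled scripts/clip_audio.py is not shown on this page, so here is an independent sketch of the stated conventions (~2 s padding, mp3 output). The ffmpeg flags and quality setting are assumptions, not taken from the actual script:

```python
# Sketch of per-highlight clipping: build an ffmpeg argv that cuts
# [start - pad, end + pad] from the episode audio into an mp3 clip.
def clip_command(audio: str, start: float, end: float,
                 out_path: str, pad: float = 2.0) -> list:
    """Return an ffmpeg command list suitable for subprocess.run."""
    t0 = max(0.0, start - pad)  # padding must not seek before 0
    t1 = end + pad
    return [
        "ffmpeg", "-y",
        "-ss", f"{t0:.2f}",
        "-to", f"{t1:.2f}",
        "-i", audio,
        "-c:a", "libmp3lame", "-q:a", "4",
        out_path,
    ]
```

For example, a highlight spanning 61–75 s would be clipped from 59.00 to 77.00 and written to site_assets/audio/h1.mp3.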

6) Build the site with the bundled editorial template

Use website_init to create a new site project.

Then copy assets into the project:

  • src/assets/highlights.json
  • src/assets/cover.jpg
  • src/assets/audio/*.mp3

Then replace template files from this skill:

  • assets/template/Home.tsx → src/pages/Home.tsx
  • assets/template/index.css → src/index.css
  • assets/template/index.html → project index.html

Notes:

  • The template expects highlights.json schema similar to assets/template/highlights.schema.example.json.
  • Ensure document.documentElement.dataset.lang is set from the language toggle.

7) Bundle and deliver

Run website_bundle and deliver the generated dist/index.html.

Template assets in this skill

  • assets/template/Home.tsx: editorial layout + global language switching + expandable English original
  • assets/template/index.css: Swiss‑brutalist paper/ink theme + language font stacks
  • assets/template/index.html: Google Fonts includes Instrument Serif, Manrope, IBM Plex Mono, and Noto JP/SC
  • assets/template/highlights.schema.example.json: reference structure
