HN Podcast Archive

v1.0.0

Automate podcast archiving by detecting new HN episodes from RSS, downloading audio, transcribing locally with Whisper, and generating markdown archives with...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for terrycarter1985/hn-podcast-archive.

Prompt preview (Install & Setup):
Install the skill "HN Podcast Archive" (terrycarter1985/hn-podcast-archive) from ClawHub.
Skill page: https://clawhub.ai/terrycarter1985/hn-podcast-archive
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install hn-podcast-archive

ClawHub CLI


npx clawhub@latest install hn-podcast-archive
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (archive HN podcast episodes, download audio, transcribe with Whisper, write markdown) match the included files and declared runtime expectations. The script requires feedparser, ffmpeg, and a 'whisper' CLI which are appropriate for this task.
Instruction Scope
SKILL.md and references document only RSS fetching, downloading audio, local transcription, writing files (audio/, transcripts/, episodes/, state.json, run-log.jsonl, index.md), and scheduling. The script reads/writes only under the specified output directory and does not access unrelated system paths, environment variables, or external endpoints other than fetching RSS and episode audio.
Install Mechanism
There is no install spec (instruction-only), which is low-risk. The code expects external binaries ('ffmpeg' and 'whisper') and the Python feedparser package; these are reasonable but require the operator to install and vet. The 'whisper' CLI is invoked via subprocess — ensure the binary on PATH is the intended transcription tool (the script will execute whatever 'whisper' refers to).
Credentials
The skill requests no environment variables, credentials, or config paths. The script operates on a provided output directory and does network fetches for the RSS and audio files only, which is proportionate to the stated purpose.
Persistence & Privilege
Flags show no forced permanence (always:false) and no modifications to other skills or system-wide settings. The skill writes only to its own output directory and state/log files as described.
Assessment
This skill appears coherent for archiving/transcribing podcasts, but take these practical precautions before installing or scheduling it:

  1. Verify and install 'whisper' and 'ffmpeg' from trusted sources; the script runs whatever 'whisper' binary it finds on PATH, so a malicious binary with that name would be executed.
  2. Run the script manually with --dry-run against a test feed and output directory to confirm behavior before scheduling.
  3. Use a dedicated output directory (not a system or home root) and consider an isolated environment (virtualenv, container) for Python deps.
  4. Inspect and trust the RSS feed sources you give the script; it will download and store audio from those URLs.
  5. Pin feedparser and any other runtime components as you deploy.

If you want higher assurance, request an install spec or signed release for the whisper/ffmpeg binaries you plan to use.


latest: vk971j9t9azp2fy101k1anm36w584e6dd
98 downloads · 0 stars · 1 version · updated 2w ago
v1.0.0 · MIT-0

HN Podcast Archive

Set up or maintain a repeatable pipeline that:

  1. reads an RSS feed,
  2. detects new episodes,
  3. downloads audio,
  4. transcribes with local Whisper,
  5. writes a markdown archive per episode,
  6. updates index/state files.
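Steps 1–2 reduce to filtering feed entries against the GUIDs already recorded in state.json. A minimal sketch, assuming entries shaped like feedparser's `feed.entries` (each with an `id`); the `processed_guids` key name is illustrative, not the script's actual schema:

```python
import json
from pathlib import Path


def select_new_episodes(entries: list[dict], state_path: Path) -> list[dict]:
    """Return feed entries whose GUID is not yet recorded in state.json.

    `entries` mirrors feedparser.parse(url).entries; the state.json
    layout used here ({"processed_guids": [...]}) is an assumption.
    """
    seen: set[str] = set()
    if state_path.exists():
        seen = set(json.loads(state_path.read_text()).get("processed_guids", []))
    # Entries without an id cannot be tracked idempotently, so skip them.
    return [e for e in entries if e.get("id") and e["id"] not in seen]
```

Everything downstream (download, transcribe, write markdown) then operates only on the returned entries.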

Workflow

  1. Read references/layout.md to understand the expected archive layout and outputs.
  2. Use scripts/hn_podcast_archive.py as the primary implementation.
  3. Run python3 scripts/hn_podcast_archive.py --help to inspect options.
  4. For first-time setup, ensure required binaries and Python modules exist.
  5. For automation, schedule the script on a recurring cadence with a stable output directory.

Required runtime dependencies

The script expects:

  • ffmpeg in PATH
  • whisper in PATH
  • Python 3.10+
  • Python package feedparser

If any dependency is missing, surface a clear setup note instead of pretending the pipeline is ready to execute.
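A preflight check along these lines (the function name is hypothetical, not part of the script) lets a run fail with a setup note rather than partway through:

```python
import importlib.util
import shutil


def missing_dependencies() -> list[str]:
    """Report which required runtime dependencies are absent."""
    missing = []
    for binary in ("ffmpeg", "whisper"):
        if shutil.which(binary) is None:
            missing.append(f"{binary} (not found in PATH)")
    if importlib.util.find_spec("feedparser") is None:
        missing.append("feedparser (pip install feedparser)")
    return missing


problems = missing_dependencies()
if problems:
    print("Setup needed before running the pipeline:")
    for p in problems:
        print(f"  - {p}")
```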

Recommended command

python3 skills/hn-podcast-archive/scripts/hn_podcast_archive.py \
  --feed-url "https://example.com/podcast.rss" \
  --output-dir ./data/hn-podcast-archive \
  --whisper-model turbo

Output expectations

For each ingested episode, create:

  • downloaded audio under audio/
  • transcript under transcripts/
  • markdown archive under episodes/

Keep these shared files current:

  • index.md
  • state.json
  • run-log.jsonl

Automation guidance

For automation, prefer a cron/standing-order style trigger that runs every few hours. The script is idempotent at the episode level by tracking processed GUIDs/URLs in state.json.
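Episode-level idempotency amounts to a check-then-record cycle on state.json. A sketch of that cycle, with an illustrative state layout (the real script's schema may differ):

```python
import json
from pathlib import Path


def mark_processed(state_path: Path, guid: str) -> bool:
    """Record a GUID in state.json; return False if it was already there."""
    state: dict = {}
    if state_path.exists():
        state = json.loads(state_path.read_text())
    processed = state.setdefault("processed_guids", [])
    if guid in processed:
        return False  # already processed: a scheduled rerun is a no-op
    processed.append(guid)
    # Write via a temp file so an interrupted run cannot truncate the state.
    tmp = state_path.with_suffix(".tmp")
    tmp.write_text(json.dumps(state, indent=2))
    tmp.replace(state_path)
    return True
```

Because reruns that see a known GUID do nothing, the cron cadence can be as frequent as you like without duplicating work.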

Safe operating rules

  • Never overwrite unrelated archive content.
  • Skip already-processed episodes unless explicitly forced.
  • Preserve source metadata (title, published date, audio URL, guid).
  • If transcription fails after download, keep the audio and record the failure in the log/state.
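The last rule can be implemented as an append-only record in run-log.jsonl; field names here are illustrative, and the downloaded audio is deliberately left on disk for a later retry:

```python
import json
import time
from pathlib import Path


def log_transcription_failure(
    log_path: Path, guid: str, audio_file: Path, error: str
) -> None:
    """Append a failure record to run-log.jsonl without touching the audio."""
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "event": "transcription_failed",
        "guid": guid,
        "audio": str(audio_file),  # kept on disk for a later retry
        "error": error,
    }
    with log_path.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
```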

Customization points

Useful flags:

  • --limit N to ingest only recent items during testing
  • --force to reprocess already-seen items
  • --dry-run to inspect actions without writing outputs
  • --whisper-model to trade speed vs accuracy

Packaging/publishing

Package the skill from its folder. Publish with ClawHub only after local validation passes and authentication is available.
