Poetry Recitation

v1.0.0

Generate poetry recitation videos using a cloned voice or system voice with starry background and Chinese subtitles. Use when: (1) User asks to recite a poem...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for zhangyanbo2007/poetry-recitation.

Prompt Preview: Install & Setup
Install the skill "Poetry Recitation" (zhangyanbo2007/poetry-recitation) from ClawHub.
Skill page: https://clawhub.ai/zhangyanbo2007/poetry-recitation
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install poetry-recitation

ClawHub CLI

Package manager switcher

npx clawhub@latest install poetry-recitation
Security Scan

  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the implementation: the SKILL.md and script create a video from poem text using a local TTS pipeline and local fonts. Required artifacts (voices.json, generate.py, fonts, and an output audio/video directory) are consistent with the stated goal.
Instruction Scope
Instructions limit behavior to synthesizing audio via a local TTS script and rendering a video with subtitles. The script calls ~/.openclaw/skills/tts-gen-pipeline/scripts/generate.py in a subprocess, reads ~/.openclaw/skills/tts-gen-pipeline/scripts/voices.json and a system font, and writes media to ~/.openclaw/workspace/audio/. SKILL.md also instructs the agent to send the resulting video to the user via the MEDIA path without extra confirmation; this automatic send behavior is expected but worth noting.
Install Mechanism
No install spec is present (instruction-only). The skill provides a Python script but does not download or install external code itself.
Credentials
The skill requests no environment variables or external credentials. It does, however, assume access to a local TTS pipeline, local cloned voice models, and a system font at a specific path — these are necessary for functionality but mean the skill will access local voice assets and write media files to the user's workspace directory.
Persistence & Privilege
The skill's always flag is false, and the skill does not modify other skills or system-wide config. It runs on demand and requires no elevated installation privileges.
Assessment
This skill appears to do what it says: it uses a local TTS pipeline to synthesize speech and renders a starry-background video with Chinese subtitles. Before installing or using it:

  1. Verify that the referenced TTS pipeline (~/.openclaw/skills/tts-gen-pipeline/) and its generate.py script come from a trusted source; the skill invokes that script with subprocess.run, so a malicious generate.py could run arbitrary code.
  2. Confirm that the voices.json and cloned voice models you store there are intended for this use (cloned voices can have legal and ethical implications).
  3. Ensure the font path and output directory are correct and writable; the script writes files to ~/.openclaw/workspace/audio/.
  4. To review behavior, run the included script locally with known inputs before allowing automated agent invocation.

No external network endpoints or secret environment variables were found, but the trust boundary includes the local TTS pipeline the skill calls.
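The inspection suggested in points (1) and (4) can be partially automated. The sketch below uses the pipeline paths stated in SKILL.md; the preflight helper itself is hypothetical, not part of the skill. It reports whether the external pipeline is present, hashes generate.py so you can review it once and pin that version, and lists the declared voices:

```python
import hashlib
import json
from pathlib import Path

# Path layout as described in SKILL.md; adjust if your install differs.
PIPELINE = Path.home() / ".openclaw" / "skills" / "tts-gen-pipeline" / "scripts"

def preflight() -> dict:
    """Report on the external TTS pipeline this skill will invoke."""
    script = PIPELINE / "generate.py"
    voices = PIPELINE / "voices.json"
    report = {"script_exists": script.exists(), "voices_exists": voices.exists()}
    if report["script_exists"]:
        # Hash generate.py so a reviewed version can be pinned and re-checked.
        report["script_sha256"] = hashlib.sha256(script.read_bytes()).hexdigest()
    if report["voices_exists"]:
        try:
            report["voices"] = list(json.loads(voices.read_text()))
        except ValueError:
            report["voices"] = []  # malformed voices.json
    return report
```

Running preflight() before the first agent invocation, and again whenever the hash changes, keeps the trust boundary visible.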

Like a lobster shell, security has layers: review code before you run it.

Latest version: vk97ejp4k569j5zaz81cwd8hct58575xz
121 downloads · 0 stars · 1 version
Updated 1 week ago
v1.0.0 · MIT-0

Poetry Recitation

Generate a video: voice recitation + starry background + timed Chinese subtitles.

Prerequisites

  • TTS pipeline at ~/.openclaw/skills/tts-gen-pipeline/ with at least one cloned voice
  • Dependencies: dashscope, websockets, moviepy, pillow, numpy
  • Font: NotoSerifCJK at /usr/share/fonts/opentype/noto/NotoSerifCJK-Regular.ttc

Quick Start

python3 scripts/poetry_recitation.py --poem "床前明月光\n疑是地上霜" --title "静夜思"
# or specify a voice:
python3 scripts/poetry_recitation.py --poem "..." --title "静夜思" --voice 章彦博

Arguments

  • --poem (required): Poem text; use \n for line breaks
  • --title (optional): Title displayed at the top of the video
  • --voice (optional): Voice name, either a cloned voice (e.g. 章彦博) or a system voice (cherry/serena/ethan/chelsie). Default: the first cloned voice
  • --output (optional): Output video path (default: ~/workspace/audio/<title>_朗诵.mp4)
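The arguments above correspond to an argparse interface along these lines. This is a hypothetical reconstruction; the actual parser lives in scripts/poetry_recitation.py and may differ in details:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Reconstruction of the CLI described in the arguments list above.
    parser = argparse.ArgumentParser(
        description="Render a poetry recitation video with voice and subtitles."
    )
    parser.add_argument("--poem", required=True,
                        help="Poem text; use \\n for line breaks")
    parser.add_argument("--title", default="",
                        help="Title displayed at the top of the video")
    parser.add_argument("--voice", default=None,
                        help="Cloned voice name or system voice "
                             "(cherry/serena/ethan/chelsie); "
                             "default: first cloned voice")
    parser.add_argument("--output", default=None,
                        help="Output video path "
                             "(default: ~/workspace/audio/<title>_朗诵.mp4)")
    return parser
```

With this shape, omitting --poem exits with a usage error, while --voice and --output fall back to the defaults described above.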

Available Voices

Cloned voices: Check with python3 ~/.openclaw/skills/tts-gen-pipeline/scripts/generate.py list-local

System voices: cherry (sweet female voice), serena (gentle female voice), ethan (steady male voice), chelsie (husky male voice)

Workflow

  1. Accept poem text from user (and optional --voice preference)
  2. Run poetry_recitation.py with the poem
  3. Send the resulting video to the user via the MEDIA path; no extra confirmation is needed

Voice Resolution

  • If --voice matches a local cloned voice name → uses cloned voice model
  • If --voice matches a system voice name → uses system voice model
  • If not specified → uses first available cloned voice

Output

The generated video (1920x1080, 24 fps, H.264 video + AAC audio) is saved to ~/workspace/audio/.
