galdr

v0.1.8

Analyze music and generate impressionistic listening experiences using galdr, an open-source audio analysis CLI. Use when a user asks to analyze a song or track.


Install


Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for sellemain/galdr.

Install the skill "galdr" (sellemain/galdr) from ClawHub.
Skill page: https://clawhub.ai/sellemain/galdr
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.


CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install galdr

ClawHub CLI


npx clawhub@latest install galdr
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (audio analysis, structural metrics, assembling prompts) matches the SKILL.md: it instructs installing/using the galdr CLI, fetching YouTube or local audio, producing JSON streams and assembled prompts. There are no unrelated required env vars, binaries, or config paths.
Instruction Scope
Instructions are narrowly scoped to running galdr commands, reading galdr-produced analysis files (analysis/<slug>/*), assembling prompts, and optionally piping prompts to LLM endpoints. This is expected, but the skill explicitly instructs downloading YouTube audio and extracting video frames and shows examples of piping assembled prompts to external models — those actions can transmit copyrighted or private content off-device. The SKILL.md also includes a Python subprocess example (uses check=True and capture_output, does not use shell=True).
Install Mechanism
The skill is instruction-only (no automatic install). It recommends installing galdr from PyPI or the GitHub repo. This is reasonable, but installing third-party packages from PyPI/GitHub is an out-of-band operation the user should verify (package provenance, versions, maintainers) before running.
Credentials
No environment variables, credentials, or config paths are requested. The skill does not ask for unrelated secrets or system access in the SKILL.md.
Persistence & Privilege
always is false and the skill does not request any persistent or elevated platform privileges. It does not instruct modifying other skills or system-wide settings.
Assessment
This skill is a set of instructions for using the external galdr CLI and appears internally consistent, but consider these practical precautions before using it:

  1. Verify galdr's provenance: inspect the PyPI package and the GitHub repository, and prefer pinned versions you have reviewed.
  2. Installing third-party packages can execute code locally; consider installing in an isolated virtualenv or container.
  3. The workflow downloads YouTube audio and can extract video frames; ensure you have the right to download the content and be mindful of disk/network usage.
  4. Assembled prompts may include lyrics, metadata, or analysis you may not want sent to external model endpoints; review prompts before piping to third-party LLMs, or use a local model.
  5. If you automate the CLI via scripts, avoid insecure subprocess patterns (don't use shell=True with untrusted inputs) and validate the slug parsing.
  6. No credentials are requested by the skill itself; never supply unrelated API keys or secrets to the tool unless you understand why they're needed.

If you want a deeper assessment, provide the actual galdr package source (the GitHub repo referenced) so its code can be inspected for network calls, telemetry, or hidden endpoints.


Latest: vk97e75q3sqpn7yjf0gwbtvptcx85fteq
70 downloads · 0 stars · 2 versions · Updated 4d ago
v0.1.8 · MIT-0

galdr

Audio analysis CLI. Generates structural metrics, then assembles a prompt for roughly 800 words of listening-experience prose.

Install

Preferred trusted sources:

pip install galdr==0.1.7

# or from source:
git clone https://github.com/sellemain/galdr.git
cd galdr
pip install -e .

Check with galdr --version. If it's missing, install it before proceeding. If provenance matters, verify the PyPI metadata or install from the source repository above before running it.
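If you script against galdr, the availability check above can be done before any other calls. A minimal sketch (the helper name galdr_available is ours, not part of galdr):

```python
import shutil
import subprocess

def galdr_available() -> bool:
    """Return True if a `galdr` executable is discoverable on PATH."""
    return shutil.which("galdr") is not None

if galdr_available():
    # Safe to query the version once we know the binary exists.
    out = subprocess.run(["galdr", "--version"],
                         capture_output=True, text=True)
    print(out.stdout.strip())
else:
    print("galdr not found; install it first (pip install galdr==0.1.7)")
```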

Core Workflows

YouTube URL → Analysis + prompt (most common)

# Step 1: fetch audio + context (slug auto-derived from title)
galdr fetch "https://youtu.be/..." --analyze

# galdr prints the slug at the end:
#   Slug : artist-song-title
#   Next : galdr assemble artist-song-title --template arc --mode full

# Step 2: assemble the prompt locally
galdr assemble artist-song-title --template arc --mode full > prompt.txt

Override auto-derived metadata if needed:

galdr fetch "https://youtu.be/..." --artist "Oliver Anthony" --title "Rich Men North of Richmond" --analyze

Local file → Analysis only

The analysis command is galdr listen, not galdr analyze.

galdr listen track.wav --name my-track
galdr assemble my-track --template arc

Second-by-second analysis (for another AI)

Galdr is strongest when read as a time-ordered listener-state trace. The stream is the primary evidence. Whole-track interpretation comes after walking the track through time.

Start with:

  • analysis/<slug>/<slug>_stream.json
  • analysis/<slug>/<slug>_perception.json
  • docs/PERCEPTION-MODEL.md

Useful extras:

  • *_harmony_stream.json
  • *_melody_stream.json
  • *_overtone_stream.json
  • *_report.json
  • galdr assemble <slug> --mode blind

Reading order:

  1. Read PERCEPTION-MODEL.md first.
  2. Treat *_stream.json as the main evidence surface.
  3. Walk the track in order.
  4. Mark transitions: silence, re-entry, pattern breaks, momentum shifts, breath changes, harmonic movement.
  5. Only then compress upward into a larger interpretation.
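The walk-in-order step can be sketched in code. This is an illustrative example only: the frame keys used below (t, momentum, silence_db) are hypothetical placeholders, not the real *_stream.json schema — consult docs/PERCEPTION-MODEL.md for the actual field names and thresholds.

```python
def mark_transitions(frames, silence_floor=-60.0, momentum_jump=0.3):
    """Scan time-ordered frames and flag silences and momentum shifts."""
    events = []
    prev = None
    for f in frames:
        if f["silence_db"] < silence_floor:
            # Depth below the floor reads as a structured withdrawal.
            events.append((f["t"], "silence"))
        elif prev is not None and abs(f["momentum"] - prev["momentum"]) > momentum_jump:
            events.append((f["t"], "momentum shift"))
        prev = f
    return events

# Toy frames standing in for json.load(open("analysis/my-track/my-track_stream.json"))
frames = [
    {"t": 0.0, "momentum": 0.2, "silence_db": -20.0},
    {"t": 1.0, "momentum": 0.9, "silence_db": -18.0},  # sharp momentum jump
    {"t": 2.0, "momentum": 0.9, "silence_db": -72.0},  # drops below silence floor
]
print(mark_transitions(frames))  # [(1.0, 'momentum shift'), (2.0, 'silence')]
```

Only after collecting such transition marks should the analysis be compressed into a whole-track interpretation.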

Do not:

  • jump straight to a whole-song mood summary
  • treat summary metrics as more important than the stream
  • ignore silence/re-entry structure
  • overclaim emotional certainty from structure alone

Minimal recipe:

galdr listen track.wav --name my-track
jq '.[0:12]' analysis/my-track/my-track_stream.json
jq '.summary' analysis/my-track/my-track_perception.json
galdr assemble my-track --mode blind > prompt.txt

Optional: send an assembled prompt to another model

Only do this if the operator explicitly wants model-written prose. Review the assembled prompt before piping it to claude, llm, or any other external model endpoint.

galdr assemble my-track --template arc --mode full | claude
galdr assemble my-track --template arc --mode full | llm

Optional Python agent pattern

import subprocess, re

fetch = subprocess.run(
    ["galdr", "fetch", url, "--analyze"],
    capture_output=True, text=True, check=True
)
# Validate the slug parse instead of assuming the line is present.
match = re.search(r"Slug\s*:\s*(\S+)", fetch.stdout)
if match is None:
    raise RuntimeError("could not find a 'Slug :' line in galdr output")
slug = match.group(1)

prompt = subprocess.run(
    ["galdr", "assemble", slug, "--template", "arc", "--mode", "full"],
    capture_output=True, text=True, check=True
).stdout

# Review prompt before sending it to any external model endpoint.

Mode and template flags

Mode            What's included
full (default)  metrics + lyrics + background + frames
lyrics          metrics + lyrics
context         metrics + background
blind           metrics only (structural, no cultural context)

--template arc prepends the listening experience rules (tone, format, interpretation bounds). Omit for raw data block.

Interpreting galdr Output

See references/metrics.md for full metric reference.

Quick read:

  • pattern_lock near 1.0 → listener is locked; near 0 → constant disruption
  • hp_balance negative → harmonic dominant (warm, tonal); positive → percussive dominant
  • breath_balance building/releasing/sustaining → energy shape across the track
  • Clustered pattern_breaks at the end → planned release; distributed → varied structure
  • silence depth below -60dB with re-lock above 0.93 momentum → structured withdrawal/return
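The quick-read heuristics above can be translated into a small helper. The metric names (pattern_lock, hp_balance) match the text, but the flat summary-dict shape assumed here is a simplification, not the real *_perception.json layout — see references/metrics.md for the actual structure.

```python
def quick_read(summary: dict) -> list[str]:
    """Turn two quick-read metrics into plain-language notes."""
    notes = []
    lock = summary["pattern_lock"]
    if lock > 0.9:
        notes.append("listener locked into the pattern")
    elif lock < 0.1:
        notes.append("constant disruption")
    hp = summary["hp_balance"]
    notes.append("harmonic dominant (warm, tonal)" if hp < 0
                 else "percussive dominant")
    return notes

print(quick_read({"pattern_lock": 0.95, "hp_balance": -0.4}))
# ['listener locked into the pattern', 'harmonic dominant (warm, tonal)']
```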

Writing Experience Prose (without piping)

When writing experience prose yourself from galdr assemble output (no --template):

  • First-person listener perspective, present tense
  • Timestamps only at structural pivots (silences, pattern breaks, major energy shifts)
  • Translate metrics — describe what they mean, don't quote numbers
  • Body anchors (chest, jaw, sternum) sparingly — two or three for the whole piece
  • End at the final sound event; no aftermath, no reflection
  • ~800 words, no section headers

Other Commands

galdr compare track-a track-b          # side-by-side structural comparison
galdr frames slug                      # extract + describe video frames at structural moments
galdr fetch "url" --no-download        # context only (Wikipedia + lyrics), no audio
galdr fetch "url" --censor             # sanitize explicit lyrics before saving
galdr catalog                          # list all indexed tracks
galdr catalog --track NAME             # summary card for one track
