Install
openclaw skills install midi-music-composer

Use when the user asks to write, compose, make, or generate a song as an actual MIDI/instrumental artifact rather than lyrics or a Suno prompt. Creates original MIDI songs, melodies, chord progressions, website music, game music, ambient loops, and short title-based compositions.

Create a short original MIDI composition from a title. The skill turns the title into a stable musical world: genre, tempo, time signature, key, chord progression, instruments, sectional form, and one main melody owner.
Use the bundled generator for candidate artifacts, select the best-scoring candidate, then revise only within the same musical world unless the user asks for a different direction. When the user wants to improve taste, compare versions, or help the critic learn, use blind audition mode.
This skill is for producing a .mid file and manifest, not for writing lyrics or prompts for external music generators.
Do not use this skill for full audio model generation, lyric writing, playlist curation, or music theory explanation unless the user also wants a generated MIDI composition.
If the user only says "write me a song" and does not specify lyrics vs MIDI, ask one short clarification: "Do you want lyrics, or should I generate a MIDI instrumental?" If this skill was explicitly invoked with /music-composer, assume MIDI and proceed.
When making a song from scratch:
1. Generate candidates with scripts/generate_song.py.
2. Critique the selected candidate with scripts/critique_song.py.
3. Deliver the .mid path and a short manifest summary, including which instrument owns the main_melody.
4. Ask: "What did you think: 1-5? And should the next one be stranger, simpler, more emotional, or more rhythmic?"
5. When the user answers, record it with scripts/record_preference.py using the manifest path from the most recent generated song.

Use blind audition mode when the user wants to compare versions, rate candidates, calibrate the critic, or make the skill learn their taste.
python3 "${HERMES_SKILL_DIR}/scripts/generate_audition.py" "Title Goes Here" --out ./out --candidates 4 --preferences ~/.hermes/music-composer-preferences.json --render-audio
I made 4 blind versions of <title>. Listen in any order and rate each 1-5.
A: <midi path or audio path>
B: <midi path or audio path>
C: <midi path or audio path>
D: <midi path or audio path>
Send ratings like: A=4, B=2, C=5, D=3
python3 "${HERMES_SKILL_DIR}/scripts/record_audition.py" "<audition_json_path>" --ratings "A=4,B=2,C=5,D=3" --opinion "<optional user notes>"
Generate and select the best of three candidates:
python3 "${HERMES_SKILL_DIR}/scripts/generate_song.py" "Title Goes Here" --out ./out --candidates 3
If ${HERMES_SKILL_DIR} is unavailable, first locate this skill directory under ~/.hermes/skills/music-composer.
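A minimal shell sketch of that fallback, using the default install path named above:

```bash
# Prefer HERMES_SKILL_DIR when set, otherwise fall back to the default install location
SKILL_DIR="${HERMES_SKILL_DIR:-$HOME/.hermes/skills/music-composer}"
python3 "$SKILL_DIR/scripts/generate_song.py" "Title Goes Here" --out ./out --candidates 3
```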
The script writes:
- a .mid file
- a .json manifest describing title, genre, time signature, key, form, chords, instruments, duration, and melody ownership
- a composition.json intermediate composition plan
- a candidate table when --candidates is greater than 1

Critique the result:
python3 "${HERMES_SKILL_DIR}/scripts/critique_song.py" ./out/title-goes-here.json
Try to render WAV audio if local tools are installed:
python3 "${HERMES_SKILL_DIR}/scripts/render_audio.py" ./out/title-goes-here.mid
Record user feedback as preference memory:
python3 "${HERMES_SKILL_DIR}/scripts/record_preference.py" ./out/title-goes-here.json --opinion "4/5, liked the coda but the chords felt flat"
Generate a blind audition set:
python3 "${HERMES_SKILL_DIR}/scripts/generate_audition.py" "Title Goes Here" --out ./out --candidates 4 --preferences ~/.hermes/music-composer-preferences.json --render-audio
Record blind audition ratings:
python3 "${HERMES_SKILL_DIR}/scripts/record_audition.py" ./out/title-goes-here-audition-*/audition.json --ratings "A=4,B=2,C=5,D=3" --opinion "C had the best ending and stranger chords"
Use preference memory in future generation:
python3 "${HERMES_SKILL_DIR}/scripts/generate_song.py" "New Title" --out ./out --candidates 5 --preferences ~/.hermes/music-composer-preferences.json
For deeper composing guidance, read references/composer-protocol.md. For compact examples of good title-to-genre mappings, read references/song-recipes.json.
Use the repo-level research harness when the user asks to improve, optimize, evaluate, benchmark, or experiment with the music-composer skill itself.
Run a fixed evaluation:
python3 research/run_experiment.py --candidates 6 --label "experiment-name"
Create or refresh the current baseline:
python3 research/run_experiment.py --candidates 6 --label "baseline" --write-baseline
The harness writes:
- research/runs/<run-id>/report.json
- research/experiments.jsonl
- research/baselines/current.json

When using research mode, make one focused change at a time, run the harness, compare against baseline, and keep changes only if the aggregate score improves without validation failures. Read research/program.md before making research-driven edits.
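A hedged sketch of that compare-against-baseline step; it assumes the baseline file and run reports share a shape, which is not documented here:

```bash
# One focused change, then a run and a side-by-side look at the new report versus the current baseline
python3 research/run_experiment.py --candidates 6 --label "one-focused-change"
LATEST_RUN=$(ls -td research/runs/*/ | head -n 1)
diff <(python3 -m json.tool research/baselines/current.json) <(python3 -m json.tool "${LATEST_RUN}report.json")
```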
Always report the main_melody_owner and include the .mid path and a manifest summary. After every delivered song, keep track of the manifest path in the conversation. Ask the user for a brief opinion:
What did you think: 1-5? And should the next one be stranger, simpler, more emotional, or more rhythmic?
When the user responds, call:
python3 "${HERMES_SKILL_DIR}/scripts/record_preference.py" "<last_manifest_path>" --opinion "<user feedback>"
Then acknowledge what was learned in one sentence. Future generations should include:
--preferences ~/.hermes/music-composer-preferences.json
For blind auditions, keep track of the audition.json path. After the user gives ratings, call record_audition.py. If the critic disagreed with the user's winner, say so plainly and treat that as calibration data rather than an error.
When delivering a generated song, use this compact shape:
Made it: <title>
MIDI: <path>
Manifest: <path>
Composition plan: <path>
Selected candidate <n>/<count>: <genre>, <time signature>, <key>, <tempo> BPM
Harmony: <harmonic_strategy> using <chords>
Lead: <main_melody_owner>
Why this one: <one short reason from score/critic/candidate table>
What did you think: 1-5? And should the next one be stranger, simpler, more emotional, or more rhythmic?
Do not dump the full manifest unless the user asks. Do not ask multiple follow-up questions after delivery; keep feedback collection easy.