Lyria

v1.0.0

Generate 30-second instrumental music via Google Lyria (Vertex AI). Use when user requests music generation, specific styles/keys/instruments, or music itera...

Security Scan
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description match the implementation: the Python and shell scripts call Vertex AI Lyria endpoints, save WAV files, and support prompt iteration. Asking for a Google access token and project/location is appropriate for calling Vertex AI.
Instruction Scope
SKILL.md stays within the music-generation workflow (setup, obtain gcloud token, create config.json, generate files). It instructs creating files under ~/.openclaw/workspace/lyria and storing a bearer token in config.json (plaintext). The instructions require the agent or user to run gcloud auth flows and to refresh the bearer token manually — this is expected but should be done carefully because the token can grant broad Google API access.
Install Mechanism
There is no platform install spec (instruction-only), which reduces risk. The README suggests installing the Google Cloud SDK; the Linux install guidance uses curl https://sdk.cloud.google.com | bash (download-and-exec) which is common for gcloud but is higher-risk than a reviewed package manager step — users should verify the installer source and prefer package-managed installs when possible.
Credentials
The skill does not declare required environment variables in the registry metadata, but the runtime expects a config.json containing project_id, location, bearer_token, and output_dir. Requesting a Google bearer token is proportionate to calling Vertex AI, but the recommended method (personal gcloud access token written to a plaintext config file) can expose broad permissions; using a scoped service account or limiting token scopes and protecting the config file is advisable.
Persistence & Privilege
The skill does not request always:true and does not modify other skills or global agent settings. It only creates/uses files under the user's workspace path (~/.openclaw/workspace/lyria) and writes generated audio there, which is consistent with its purpose.
Assessment
This skill appears to do what it says (generate short instrumental tracks via Google Lyria). Before installing or running it, consider:

  1. Protect credentials: the workflow asks you to put a bearer token in ~/.openclaw/workspace/lyria/config.json. Store this file with strict permissions (chmod 600), or prefer a service account with minimal scopes instead of a personal token.
  2. Token scope: a gcloud access token may be usable for other Google APIs; create or choose credentials with limited permissions when possible.
  3. Installer caution: the Linux install suggestion pipes a remote script (curl | bash); verify the source or use your OS package manager.
  4. Metadata mismatch: the skill metadata lists no required env vars, yet the runtime needs a config file containing a bearer token; confirm you're comfortable providing credentials via the config file.
  5. Review files before running: the bundled scripts make network calls only to Google Vertex AI endpoints and write WAV files to the workspace, but always inspect third-party scripts before execution.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97fd90hga8n976ys4p1k2kpm58295rq
258 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Music Generation Skill (Lyria)

Generate instrumental music from text prompts using Google Lyria API via Vertex AI.

Capabilities

  • Generate 30-second WAV instrumental tracks optimized for short-form content
  • Support style, mood, key, and instrument specifications
  • Save generated music to workspace with custom or timestamped filenames
  • Iterate based on user feedback

Use Cases

  • TikTok/YouTube Shorts — background music for 15-30s videos
  • Instagram Reels — quick musical intros/outros
  • Video transitions — short audio bridges
  • Loops — repeating segments for longer content

Limitations

  • 30 seconds max per generation (Lyria constraint)
  • Short-form only — designed for TikTok/Reels/Shorts, not full songs
  • Instrumental only (no vocals/lyrics)
  • Bearer token expires hourly (requires periodic refresh)
  • English prompts recommended

First-Time Setup

When using this skill for the first time on a machine, follow these steps:

Step 1: Create Directory Structure

# Create the lyria folder structure in workspace
mkdir -p ~/.openclaw/workspace/lyria/generated_music

Step 2: Install gcloud CLI (if not installed)

Check if gcloud is installed:

gcloud --version

If not installed, install Google Cloud SDK:

macOS (Homebrew):

brew install --cask google-cloud-sdk

Linux:

curl https://sdk.cloud.google.com | bash
exec -l $SHELL

Verify installation:

gcloud --version

Step 3: Authenticate with Google Cloud

gcloud auth login

Follow the browser prompt to authenticate.

Step 4: Get Your Configuration Values

Get project_id:

gcloud config get-value project

Get location:

gcloud config get-value compute/region

If no region is set, use: us-central1 (recommended for Lyria)

Get bearer_token:

gcloud auth print-access-token

⚠️ Note: This token expires in approximately 1 hour. You'll need to refresh it periodically.

Step 5: Create Config File

Create ~/.openclaw/workspace/lyria/config.json with your values:

{
  "project_id": "your-project-id-here",
  "location": "us-central1",
  "bearer_token": "ya29.a0AfH...your-token-here",
  "output_dir": "~/.openclaw/workspace/lyria/generated_music"
}

Replace the placeholders with your actual values from Step 4.
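Steps 1–5 can also be scripted. The sketch below is a minimal, illustrative helper (not part of the bundled scripts): it shells out to gcloud for the values from Step 4, writes config.json, and restricts the file's permissions since it holds a bearer token. It assumes gcloud is installed and authenticated.

```python
import json
import os
import subprocess

def gcloud(*args):
    """Run a gcloud command and return its trimmed stdout."""
    return subprocess.run(
        ["gcloud", *args], capture_output=True, text=True, check=True
    ).stdout.strip()

def write_config(path, project_id, location, bearer_token):
    """Write the Lyria config.json with owner-only permissions."""
    config = {
        "project_id": project_id,
        "location": location or "us-central1",  # fall back to the recommended Lyria region
        "bearer_token": bearer_token,
        "output_dir": "~/.openclaw/workspace/lyria/generated_music",
    }
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    os.chmod(path, 0o600)  # the token is sensitive: owner read/write only
    return config

if __name__ == "__main__":
    path = os.path.expanduser("~/.openclaw/workspace/lyria/config.json")
    write_config(
        path,
        project_id=gcloud("config", "get-value", "project"),
        location=gcloud("config", "get-value", "compute/region"),
        bearer_token=gcloud("auth", "print-access-token"),
    )
```

The chmod 600 step implements the "protect credentials" advice from the security assessment above.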


Usage

Quick Generate (Shell Wrapper)

The shell wrapper is the recommended way to generate music. It reads all configuration from the config file automatically.

./scripts/music-gen.sh "<prompt>" [name]

Arguments:

  • prompt (required): Text description of the music you want
  • name (optional): Custom filename (without extension). If omitted, uses timestamp.

Examples:

# With custom name:
./scripts/music-gen.sh "chill lo-fi piano C minor" "my_relaxing_track"
# Output: ~/.openclaw/workspace/lyria/generated_music/my_relaxing_track.wav

# With timestamp (default):
./scripts/music-gen.sh "energetic rock guitar solo"
# Output: ~/.openclaw/workspace/lyria/generated_music/music_20260302_143022.wav

Direct Python Usage

For more control or integration with other tools, call the Python script directly:

python3 scripts/music-gen.py <config_file> "<prompt>" [name]

Arguments:

  • config_file (required): Path to config.json
  • prompt (required): Text description of the music
  • name (optional): Custom filename (without extension)

Example:

python3 scripts/music-gen.py ~/.openclaw/workspace/lyria/config.json "jazz saxophone smooth" "evening_jazz"

Prompt Guidelines

Good prompts include:

  • Genre/Mood: "chill lo-fi", "energetic rock", "melancholic jazz", "epic orchestral"
  • Key: "C minor", "F major", "E Phrygian"
  • Instruments: "piano, strings, soft drums", "electric guitar, bass, drums"
  • Tempo/Feel: "slow and relaxing", "fast and driving", "mid-tempo groove"

Example prompts:

  • "A calm acoustic folk song in C minor with gentle guitar melody and soft strings, no drums"
  • "Upbeat electronic dance music with strong synth bass and driving beats, 128 BPM feel"
  • "Melancholic jazz piano in F minor with soft brush drums and upright bass"
  • "Epic cinematic orchestral with brass and strings, heroic and uplifting mood"

Refreshing Your Bearer Token

Since tokens expire hourly, you may need to refresh during long sessions:

  1. Get new token:

    gcloud auth print-access-token
    
  2. Update config.json with the new token

  3. Continue generating music
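The refresh steps above can be automated with a small helper. This is an illustrative sketch (not part of the bundled scripts): update_token rewrites only the bearer_token key, and refresh_token fetches a fresh token via gcloud first.

```python
import json
import subprocess

def update_token(config_path, new_token):
    """Rewrite config.json with the given bearer token, keeping all other keys."""
    with open(config_path) as f:
        config = json.load(f)
    config["bearer_token"] = new_token
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
    return config

def refresh_token(config_path):
    """Fetch a fresh access token from gcloud and store it in config.json."""
    token = subprocess.run(
        ["gcloud", "auth", "print-access-token"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return update_token(config_path, token)
```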


Workflow for Agents

When a user asks you to generate music:

  1. Check if setup is complete:

    • Verify ~/.openclaw/workspace/lyria/config.json exists
    • If not, guide user through First-Time Setup above
  2. Check if token is valid:

    • If generation fails with 401/403, token expired
    • Guide user to refresh token (see above)
  3. Refine the prompt:

    • Ask user for style, mood, instruments, key if not provided
    • Help craft a descriptive prompt
  4. Generate:

    ./scripts/music-gen.sh "<refined_prompt>" "<descriptive_name>"
    
  5. Deliver:

    • Send the generated .wav file to user
    • Confirm success
  6. Iterate (if needed):

    • Ask: "Want any changes? Faster? Different instruments?"
    • Update prompt, regenerate, deliver

Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| Config file not found | First-time setup incomplete | Follow First-Time Setup steps |
| 401 Unauthorized | Bearer token expired or invalid | Refresh token: gcloud auth print-access-token |
| 403 Forbidden | No access to Lyria API | Enable Vertex AI API in Google Cloud Console |
| 404 Not Found | Project or location incorrect | Verify project_id and location in config |
| No predictions | API issue or invalid prompt | Retry with modified prompt |
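An agent can map the HTTP status codes in the table above to the corresponding fix. A minimal sketch (the messages paraphrase the table; the function is illustrative, not part of the bundled scripts):

```python
def classify_error(status_code):
    """Map an HTTP status from the Lyria endpoint to the remedy in the table above."""
    guidance = {
        401: "Bearer token expired or invalid: run `gcloud auth print-access-token` "
             "and update config.json.",
        403: "No access to the Lyria API: enable the Vertex AI API "
             "in the Google Cloud Console.",
        404: "Project or location incorrect: verify project_id and location "
             "in config.json.",
    }
    return guidance.get(status_code, "Unexpected error: retry, or modify the prompt.")
```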

File Structure

~/.openclaw/workspace/
└── lyria/
    ├── config.json              # User credentials and settings
    └── generated_music/         # All generated audio files
        ├── music_20260302_143022.wav
        ├── my_relaxing_track.wav
        └── ...

API Reference


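This section is empty in the listing; for orientation, Vertex AI publisher models are generally called via a predict endpoint of the form shown below. The model ID (lyria-002) and the request-body field names are assumptions — check the bundled scripts for the exact values they use.

```python
def lyria_endpoint(project_id, location, model="lyria-002"):
    """Build a Vertex AI predict URL; the model ID is an assumption."""
    return (
        f"https://{location}-aiplatform.googleapis.com/v1/"
        f"projects/{project_id}/locations/{location}/"
        f"publishers/google/models/{model}:predict"
    )

def lyria_request(prompt, negative_prompt=None):
    """Sketch of a predict request body; field names are assumptions."""
    instance = {"prompt": prompt}
    if negative_prompt:
        instance["negative_prompt"] = negative_prompt
    return {"instances": [instance], "parameters": {}}
```

The request would be sent as a POST with the bearer token from config.json in an Authorization header.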
Best Practices

  1. Use specific prompts — "jazz piano in F minor" works better than "jazz music"
  2. Specify what to exclude — "no drums" or "no vocals" helps steer output
  3. Generate variations — Create 2-3 versions and let user pick favorite
  4. Monitor costs — At $0.06 per 30s, 10 generations = $0.60
  5. Refresh token proactively — If session >45 min, get fresh token before generating

Example Complete Session

User: Generate some lo-fi music for studying

Agent: I'll create a chill lo-fi track for you. First, let me check your setup...
[Checks config exists - yes]
[Checks token validity - yes]

Generating: "Chill lo-fi hip hop with soft piano melody, warm vinyl crackle, relaxed beats, no vocals"
...
Generated: ~/.openclaw/workspace/lyria/generated_music/lofi_study_track.wav
[Sends file]

Done! 30 seconds of chill lo-fi. Want me to make it longer by generating a continuation, or try a different vibe?
