Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

MusicPlaylistGen

v1.0.3

Generate natural language playlists from your local music library using LLMs, accessible via web or API after indexing your music folder once.

0· 84·0 current·0 all-time
byJu-Chiang Wang@asriverwang
MIT-0
Download zip
LicenseMIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotalVirusTotal
Pending
View report →
OpenClawOpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description align with the code and instructions: the skill scans a local music folder, uses ffprobe for metadata, enriches tracks using LLMs (Anthropic or MiniMax), stores a SQLite DB, and serves a local web UI. Requesting Anthropic/MiniMax API keys and MUSIC_DIR is appropriate for the stated purpose.
!
Instruction Scope
The runtime instructions and code send extracted metadata and file paths to third‑party LLM endpoints for enrichment (expected for functionality), but the skill also loads and injects MUSIC_RULES.md into LLM system prompts. Editable rules become part of prompts — a potential vector for prompt injection and untrusted content influencing LLM behavior. The SKILL.md itself contains a pre-scan 'system-prompt-override' pattern. The indexer does not upload audio, only metadata/tags/paths, but that metadata and file paths may be sensitive. The server can be made reachable on a LAN/Tailscale address if the user sets MUSIC_SERVER_URL — that can expose links to others if misconfigured.
Install Mechanism
No packaged install spec; SKILL.md instructs cloning from GitHub and using a virtualenv + pip to install libraries (requests, openai, anthropic, mutagen). That is proportionate, but the repo source is 'unknown' in the registry metadata (owner id listed) — verify the GitHub repo/author before cloning. There are a couple of small code inconsistencies (different MiniMax base URLs in different files) suggesting sloppy maintenance but not necessarily malicious installs.
Credentials
Only expects MUSIC_DIR, ANTHROPIC_API_KEY and/or MINIMAX_API_KEY, plus optional PORT/MUSIC_SERVER_URL/DB_PATH — these are proportionate. The code reads .env into process environment. Providing LLM API keys is required for full functionality; if you supply keys they will be used to call external LLM endpoints with file metadata and prompts.
Persistence & Privilege
The skill does not request 'always: true' or other elevated skill-level privileges. It runs a local HTTP server and writes a persistent SQLite DB (music.db) containing metadata and LLM enrichments; this is expected behavior for the feature and does not appear to modify other skills or system-wide settings.
Scan Findings in Context
[system-prompt-override] expected: The skill deliberately constructs and injects system prompts (including contents of MUSIC_RULES.md) into LLM requests — this matches the feature (configurable LLM rules). However, editable rule injection is a prompt-injection surface; treat MUSIC_RULES.md as sensitive and do not load untrusted content into it.
What to consider before installing
This skill appears to implement the playlist/indexing functionality it advertises, but take these precautions before installing: - Verify the repository and author (git clone URL and commit history) before running code from an unknown source. The registry metadata shows an owner id but no homepage; confirm you trust the GitHub repo. - Understand what is sent to external services: the indexer sends file paths, metadata (tags), and constructed prompts to Anthropic or MiniMax. It does NOT upload audio files, but metadata and filenames can be sensitive. - If you are uncomfortable sending metadata to third-party LLMs, do not provide API keys. Consider running against a local/offline model or skipping the LLM enrichment step. - Review MUSIC_RULES.md and any rule changes carefully. That file is injected into system prompts; editing it can change LLM behavior and could be abused if populated with untrusted content. - When running the server, do not set MUSIC_SERVER_URL to a public IP unless you intentionally want network access. By default it listens on localhost:5678; exposing it to LAN/Tailscale will make links reachable by others. - Inspect the code (playlist_server.py, smart_indexer.py, start.sh) for any network calls and validate the endpoints. There are small inconsistencies in MiniMax endpoint strings; confirm the client libraries or HTTP calls point to the official provider domains before using your real API keys. - Run in a restricted environment (non-root user, limited network if possible) and backup the generated music.db if you value the indexing work. If you decide to proceed, supply API keys only temporarily and consider rotating keys later. If you want, I can: (1) produce a short checklist of exact files and lines to inspect for network calls, (2) highlight the parts of the code that construct prompts and what metadata they include, or (3) suggest safer configuration options (local LLM, binding to localhost, firewall rules).

Like a lobster shell, security has layers — review code before you run it.

latestvk970h1x1j9vhavyhyacjyyfx8583jw4k

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Comments