ASMR Sleep Sound Generator | ASMR助眠音生成器
Pass. Audited by ClawScan on May 7, 2026.
Overview
This appears to be a low-risk ASMR sound tool, but users should verify the provenance of the local project files, which are missing from the reviewed package, and be cautious if entering an LLM API key.
This skill looks benign for a local ASMR audio app. Before installing or using it, confirm where the actual project files come from, because they are not included in the reviewed package. If you enable LLM control, use a limited API key and avoid entering sensitive personal information.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The reviewed package does not contain the app implementation, so any files you later run from that workspace path are outside this review.
The skill references project documentation and local project files that are not included in the supplied file manifest, which contains only SKILL.md and _meta.json.
See `docs/orchestration-design.md` ... see `~/.openclaw/workspace/projects/asmr-sleep-sound-generator/README.md`
Before running the local server, verify that the project files came from a trusted source and inspect them if they are created or downloaded separately.
An API key stored in browser localStorage persists after the session ends and is readable by any script running on the same local site origin.
The skill discloses that users may enter an LLM API endpoint/key and that it will be persisted in browser localStorage.
Users configure their own LLM API endpoint/key, which is stored in localStorage
Use a restricted or disposable API key, only serve trusted app files, and clear browser localStorage if you no longer want the key retained.
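If you decide to stop using the skill, the persisted key should be removed explicitly. A minimal sketch of doing so, assuming a hypothetical storage key name (the reviewed package does not document the actual key the app uses); the function takes a Storage-like object so that in the browser you would pass `window.localStorage`:

```javascript
// Hypothetical key name; the real name used by the app is not in the
// reviewed package and must be confirmed by inspecting the app files.
const LLM_KEY = "llm_api_key";

// Remove a persisted API key from a Storage-like object.
// In the browser, call clearLlmKey(window.localStorage).
function clearLlmKey(storage) {
  const hadKey = storage.getItem(LLM_KEY) !== null;
  storage.removeItem(LLM_KEY);
  return hadKey; // true if a key was present and has now been removed
}
```

Checking DevTools (Application > Local Storage) afterwards confirms nothing else from the app's origin still holds the credential.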
Text you type for natural-language control may be processed by an external or local LLM endpoint depending on your configuration.
Natural-language control implies sending user descriptions to a user-configured LLM API provider or endpoint.
Natural-language control: the LLM API parses the user's description, then automatically orchestrates and plays sounds
Avoid entering private information in natural-language prompts and use only LLM endpoints whose privacy and retention practices you trust.
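To make the data flow concrete: a request like the sketch below is presumably what leaves the browser for each natural-language command. The endpoint URL, auth header format, and payload shape here are assumptions, since the implementation is not in the reviewed package; the point is that both the prompt text and the bearer key travel together to whatever endpoint the user configured.

```javascript
// Build the outgoing request for a user-configured LLM endpoint.
// Shape is illustrative only; the actual app may differ.
function buildLlmRequest(endpoint, apiKey, description) {
  return {
    url: endpoint, // wherever the user pointed the app, local or remote
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // the key accompanies every call
    },
    body: JSON.stringify({ prompt: description }), // the raw user text
  };
}
```

Because the endpoint is user-chosen, the same prompt could go to a local model or to a third-party API with its own retention policy, which is why private details do not belong in these descriptions.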
