Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Lobster Radio – Free Local AI Radio

v0.1.0

Personalized news radio generation service. Use cases: (1) generate a radio program on a specific topic, (2) schedule a daily push, (3) configure TTS voices, (4) listen to past programs. Not for: music playback, live broadcasts, video content.

by 坤涛/kunTao (@jayden-x-l)
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name and description match what the bundle contains: TTS providers, content generation, scripts to download Qwen3‑TTS models, audio management, and OpenClaw/LobsterAI integration. Requested permissions (fileSystem, network) and required binary (python3) are consistent with downloading models, saving audio, and integrating into the platform.
Instruction Scope
SKILL.md and included docs instruct the agent/operator to download large models from HuggingFace/ModelScope and run local Python code; they also recommend (in examples/docs) using trust_remote_code=True when loading models. The skill instructs reuse of a 'web-search' skill (via Python import or shell fallback) for news gathering — that increases scope and requires calling another skill. The skill also supports voice cloning from user audio samples (3s sample). These behaviors are within the stated purpose but broaden the runtime surface (network downloads, potential arbitrary code from model repos, and use of user audio), so they merit caution.
Install Mechanism
There is no formal package install spec, but the repo includes install scripts (scripts/install.sh) and instructions that will pip install requirements and run huggingface/modelscope downloads. Download sources are common (HuggingFace, ModelScope) which is expected for models, but the documentation demonstrates loading model repos with trust_remote_code (i.e., executing remote repo code). Automatic download + executing remote model code increases risk compared with pure dependency installation.
Credentials
The skill does not request secrets or cloud credentials and only declares python3 and platform permissions. This is appropriate for local model use. Documentation suggests optional HF mirror endpoint (HF_ENDPOINT) and use of huggingface/modelscope CLIs — these are benign for model retrieval but could require credentials for private repos; the skill does not request them explicitly.
Persistence & Privilege
always:false and the skill is user-invocable; it stores models, audio, and configuration on disk (MEMORY.md / SQLite). It also includes instructions to copy into the OpenClaw workspace and restart services — expected for a skill that writes files. No evidence it modifies other skills' configs. Because it writes files and may be integrated into a user's OpenClaw workspace, install-time isolation is recommended.
Scan Findings in Context
[huggingface_cli_download] expected: SKILL.md and many docs instruct downloading the Qwen3-TTS model from HuggingFace — this is expected for a local TTS skill.
[modelscope_snapshot_download] expected: ModelScope is suggested as an alternative mirror for domestic users; this is expected for model retrieval.
[trust_remote_code_usage] unexpected: Documentation shows using AutoModelForCausalLM.from_pretrained(..., trust_remote_code=True). Executing remote model repository code is sometimes necessary for custom model implementations but carries significant security risk (remote code execution). If the provider implementation enables trust_remote_code or executes downloaded model code, that should be reviewed and run only in an isolated/trusted environment.
[voice_cloning] expected: Skill and docs advertise '3s voice cloning'. This is a feature of the TTS model but is a privacy risk (ability to synthesize other people's voices) and should be used with consent.
What to consider before installing
What to check before installing/using this skill:

1) Review providers/qwen3_tts.py and scripts/install.sh before running them. Look for use of trust_remote_code or any code that executes downloaded files or shells out to run remote scripts. If trust_remote_code is enabled, prefer to disable it or run in a sandbox.

2) Run the install and model download in an isolated environment (VM, container, or dedicated machine). Model downloads will fetch files from HuggingFace/ModelScope, and some model repos include custom Python code that runs when the model is loaded.

3) Inspect, and consider removing or modifying, the automatic install scripts if you cannot fully trust them. Prefer manual model retrieval from a verified model repo and manual dependency installation.

4) Be cautious with the voice-cloning feature: it can synthesize voices from short samples. Only supply audio you own or have permission to use.

5) Limit network and filesystem permissions where possible during testing. The skill will store models (~5GB) and generated audio; make sure you are comfortable with those writes and with the skill being added to your OpenClaw workspace.

6) If you need stronger assurance, ask the maintainer for a minimal provider implementation that loads only vetted model code (no trust_remote_code), or use a prebuilt, signed wheel/binary from a trusted source.

Reason for 'suspicious': the skill is functionally coherent, but the documented pattern of downloading third-party model repos and enabling trust_remote_code (i.e., executing remote code) raises non-trivial security and privacy concerns that require operator review and mitigation.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎙️ Clawdis
Bins: python3
Tags: audio · entertainment · latest · local-ai · qwen · radio · streaming · tts
270 downloads
1 star
1 version
Updated 17h ago
v0.1.0
MIT-0

Lobster Radio (龙虾电台)

Personalized news radio generation service using a local Qwen3-TTS model, completely free.

Platform Support

  • OpenClaw - integrated via SKILL.md
  • LobsterAI (Youdao Lobster) - integrated via skill.json

When to Use

USE this skill when:

  • "Generate a radio program about artificial intelligence"
  • "Push tech news at 8 a.m. every day"
  • "Configure my radio voice"
  • "Play today's radio program for me"
  • "Show my past radio programs"

DON'T use this skill when:

  • Music playback → use a music player
  • Live broadcasts → use a radio app
  • Video content → use a video platform

Prerequisites

Install Python dependencies

pip install -r requirements.txt

Download the Qwen3-TTS model

Note: the Qwen3-TTS model is not in the public Ollama registry; it must be downloaded from HuggingFace or ModelScope.

Method 1: download from HuggingFace (recommended)

pip install huggingface_hub
huggingface-cli download Qwen/Qwen3-TTS-12Hz-0.6B-CustomVoice --local-dir ./models/Qwen3-TTS-12Hz-0.6B-Base

Method 2: download from ModelScope (recommended for users in mainland China)

pip install modelscope
python -c "from modelscope import snapshot_download; snapshot_download('qwen/Qwen3-TTS-12Hz-0.6B-Base', cache_dir='./models')"

Method 3: automatic download on first run

The skill detects a missing model the first time a radio program is generated and downloads it automatically:

python scripts/generate_radio.py --topics "人工智能"

Verify the installation

python tests/verify_all.py

Quick Start

Generate a radio program

python scripts/generate_radio.py --topics "人工智能" --tags "科技"

Configure TTS

python scripts/configure_tts.py --voice xiaoxiao

Set up a scheduled task

OpenClaw:

# Push a tech news radio program at 8 a.m. every day
openclaw cron add \
  --name "每日科技电台" \
  --cron "0 8 * * *" \
  --session isolated \
  --message "生成科技新闻电台" \
  --announce \
  --channel feishu

LobsterAI: create a scheduled task in the GUI or through conversation:

"Generate a tech news radio program for me at 8 a.m. every day"

Configuration

TTS configuration

On first use, the skill guides the user through downloading the Qwen3-TTS model.

  • OpenClaw: configuration is stored in MEMORY.md
  • LobsterAI: configuration is stored in a SQLite database

User preferences

User preferences (subscribed tags, preferred voices, etc.) are stored per platform:

  • OpenClaw: in MEMORY.md
  • LobsterAI: in persistent memory

Workflow

1. Generate a radio program

  1. Parse the topics/tags from the user's input
  2. Call the platform LLM to generate the content
  3. Call the Qwen3-TTS model to convert it to audio
  4. Save the audio file
  5. Return the audio link and a text summary
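The five steps above can be sketched as a small pipeline. Here, generate_script and synthesize are stand-ins for the platform LLM call and the Qwen3-TTS call (hypothetical names, not the skill's actual API):

```python
import tempfile
from pathlib import Path

def generate_script(topics, tags):
    # Step 2: in cowork mode the platform LLM produces this text.
    return f"今日电台:{'、'.join(topics)}({'、'.join(tags)})"

def synthesize(text, voice="xiaoxiao"):
    # Step 3: Qwen3-TTS would return audio bytes; stubbed as silence here.
    return b"\x00" * len(text)

def generate_radio(topics, tags, out_dir):
    script = generate_script(topics, tags)          # steps 1-2
    audio = synthesize(script)                      # step 3
    path = Path(out_dir) / "radio.wav"
    path.write_bytes(audio)                         # step 4: save the audio file
    return {"audio": str(path), "summary": script}  # step 5: link + summary

result = generate_radio(["人工智能"], ["科技"], tempfile.mkdtemp())
print(result["summary"])
```

The real skill plugs the platform LLM and the local TTS model into the two stubbed steps; the control flow is the same.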

Cowork Mode (recommended)

In LobsterAI/OpenClaw cowork mode, the platform's main conversation LLM generates the content, and this skill handles only the TTS synthesis.

Works with any LLM: Claude, GPT-4, Qwen, Llama, Gemini, and more.

Advantages:

  • ✅ No extra LLM API key needed
  • ✅ Leverages the full capability of the platform LLM (any model)
  • ✅ Smarter, more natural content generation
  • ✅ Easy switching between and comparison of models

Important: fetching news content

Before generating a radio program, fetch the latest news first. Call the web-search skill several times to gather news on different topics.

Tip: keep the generated script under 200 characters so the audio stays within one minute, which suits mobile listening.

Note: per the web-search skill's own documentation, it may not be directly importable from Python. Try the following, in order:

  1. First try the Python import: from web_search import search
  2. If that fails, call it via the bash script: bash scripts/web_search.sh "<search keywords>"
# 1. First, call the web-search skill several times for news on different
#    topics (3-5 searches may be needed to cover them all).
#    Try the Python import or the bash script as described above.

# Example searches:
# - Search 1: "今日国际要闻" (today's international headlines)
# - Search 2: "最新科技动态" (latest tech developments)
# - Search 3: "今日财经要闻" (today's financial headlines)
# - Search 4: "体育热点新闻" (trending sports news)
# - Search 5: "娱乐热点事件" (trending entertainment stories)

# 2. The platform's main conversation LLM consolidates the search results
#    into a coherent radio script.
content = """
欢迎收听今日新闻摘要。

首先是国际要闻...(根据web-search结果生成)

接下来是科技动态...(根据web-search结果生成)

财经方面...(根据web-search结果生成)

体育要闻...(根据web-search结果生成)

娱乐热点...(根据web-search结果生成)

以上就是今天的新闻摘要,感谢收听。
"""

# 3. Call the skill to synthesize speech
audio_url = cowork_generate(
    title="今日新闻摘要",
    content=content,
    voice="xiaoxiao",  # a female voice suits news
    emotion="neutral"
)

print(f"Audio generated: {audio_url}")

This skill only supports Cowork Mode

  • Content generation: platform LLM
  • TTS synthesis: this skill
  • API key required: no extra API key
  • Supported models: any model

Note: this skill only supports Cowork Mode; the platform's main conversation LLM generates the content, and the skill handles only the TTS synthesis.

2. Set up a scheduled task

OpenClaw:

  1. Parse the user's scheduling request
  2. Create the task via OpenClaw's cron system
  3. OpenClaw runs the task on schedule and pushes the result

LobsterAI:

  1. Parse the user's scheduling request
  2. Create the task via LobsterAI's scheduled-task skill
  3. LobsterAI runs the task on schedule and pushes the result

3. Configure TTS

  1. Check whether the model has been downloaded
  2. Show the available voices for selection
  3. Save the configuration to platform storage

Available Voices

Chinese voices

  • xiaoxiao (晓晓): female, gentle; suits news broadcasts
  • yunjian (云健): male, steady; suits financial news
  • xiaochen (晓辰): female, lively; suits entertainment news

Emotion styles

  • neutral: suits news broadcasts
  • happy: suits entertainment content
  • sad: suits serious topics
  • excited: suits tech breakthroughs
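The voice and emotion recommendations above can be captured in a small lookup table; the VOICE_PRESETS dict and preset_for helper below are illustrative, not part of the skill's configuration API:

```python
# Pairings follow the recommendations in the voice and emotion lists above.
VOICE_PRESETS = {
    "news":          {"voice": "xiaoxiao", "emotion": "neutral"},
    "finance":       {"voice": "yunjian",  "emotion": "neutral"},
    "entertainment": {"voice": "xiaochen", "emotion": "happy"},
    "tech":          {"voice": "xiaoxiao", "emotion": "excited"},
}

def preset_for(content_type):
    """Pick a voice/emotion pair, defaulting to the news preset."""
    return VOICE_PRESETS.get(content_type, VOICE_PRESETS["news"])

print(preset_for("finance"))  # {'voice': 'yunjian', 'emotion': 'neutral'}
```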

Examples

Example 1: generate a tech radio program

User: "Generate a radio program about artificial intelligence"

Bot:

🎙️ Generating your AI-themed radio program...

📝 Summary:
The AI field is moving fast today: OpenAI has released its latest GPT-5 model...

🎧 [Play audio]
📥 [Download link]

⏱️ Duration: 3m 45s

Example 2: set up a scheduled push

User: "Push tech news at 8 a.m. every day"

Bot:

✅ Scheduled task created!

⏰ Push time: 8:00 every morning
📌 Subscribed tag: tech
📢 Push channel: current conversation

Tomorrow at 8 a.m. your radio program will be generated and pushed automatically!

Platform-Specific Notes

OpenClaw

  • Reuses OpenClaw's LLM configuration
  • Scheduled tasks use OpenClaw's cron system
  • User configuration is stored in OpenClaw's MEMORY.md

LobsterAI

  • Reuses LobsterAI's LLM configuration (Claude Agent SDK)
  • Scheduled tasks use LobsterAI's scheduled-task skill
  • User configuration is stored in LobsterAI's persistent memory
  • Supports remote triggering via IM (Feishu, DingTalk, Telegram)

Troubleshooting

Model not downloaded

Error: "模型未找到" (model not found) or "模型下载失败" (model download failed)

Solution:

# Method 1: use HuggingFace
pip install huggingface_hub
huggingface-cli download Qwen/Qwen3-TTS-12Hz-0.6B-CustomVoice --local-dir ./models/Qwen3-TTS-12Hz-0.6B-Base

# Method 2: use ModelScope (recommended for users in mainland China)
pip install modelscope
python -c "from modelscope import snapshot_download; snapshot_download('qwen/Qwen3-TTS-12Hz-0.6B-Base', cache_dir='./models')"

Audio generation fails

Error: "音频生成失败" (audio generation failed)

Possible causes:

  1. Model not loaded correctly
  2. Insufficient RAM/VRAM
  3. Text too long

Solution:

# Check model status
python tests/verify_all.py

# Use CPU mode (if VRAM is insufficient)
# set use_gpu=False in the configuration

# Check system resources
htop

Performance

  • CPU inference: 1–2 s per 100 characters
  • GPU inference: 0.5–1 s per 100 characters
  • Memory footprint: ~500 MB
  • Model size: ~5 GB
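Combining these figures with the 200-character script recommendation from the Workflow section gives a quick back-of-the-envelope check. The helper below is illustrative; the 2 s per 100 characters default is the worst-case CPU rate listed above:

```python
def estimate_synthesis_seconds(text, secs_per_100_chars=2.0):
    """Worst-case synthesis time at the quoted CPU inference rate."""
    return len(text) / 100 * secs_per_100_chars

script = "欢" * 200  # a 200-character script (~1 minute of audio)
print(f"~{estimate_synthesis_seconds(script):.0f}s on CPU")  # ~4s worst case
```

So even on CPU, a script sized for a one-minute program should synthesize in a few seconds.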

Cost

  • Completely free
  • No API call fees
  • No usage limits
  • Works offline

Support

If you run into problems, please visit:
