Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

autoglmasr

v0.0.1

AutoGLM ASR MCP service: concurrent transcription of long audio, context passing, and timestamped segmentation. Built on Zhipu GLM-ASR-2512. Trigger words: speech recognition, ASR, transcription, transcribe audio, long audio


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for isabellazhangym/autoglmasr.

Prompt Preview: Install & Setup
Install the skill "autoglmasr" (isabellazhangym/autoglmasr) from ClawHub.
Skill page: https://clawhub.ai/isabellazhangym/autoglmasr
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install isabellazhangym/autoglmasr

ClawHub CLI

Package manager switcher

npx clawhub@latest install autoglmasr
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
high confidence
Purpose & Capability
The skill's stated purpose (long-audio ASR, chunking, concurrency, timestamps using GLM-ASR-2512) matches the instructions and examples (split on silence, concurrent HTTP calls to open.bigmodel.cn). However, the registry metadata lists no required environment variables or binaries, while the SKILL.md clearly requires ffmpeg and an AUTOGLM_ASR_API_KEY; that metadata mismatch is an inconsistency.
Instruction Scope
The instructions tell the agent to read local audio files (absolute paths), run 'npx autoglm-asr-mcp' (which will fetch and execute code from npm at runtime), and use an API key to POST audio to https://open.bigmodel.cn. Reading local audio is expected for ASR, but the guidance to fetch/execute remote npm code and to use an API key (not declared in registry) increases risk and scope beyond what the registry claims.
Install Mechanism
There is no install spec in the registry, yet the SKILL.md relies on 'npx autoglm-asr-mcp' (dynamic download/execute from npm) and on installing ffmpeg. Dynamically pulling an npm package at runtime is an implicit install step not captured in metadata and has higher risk than a pure instruction-only skill.
Credentials
The environment variables listed in SKILL.md (AUTOGLM_ASR_API_KEY, API_BASE, model, concurrency, timeouts, etc.) are proportionate to an ASR client. But the registry declares none of these; importantly the API key will be sent to an external third-party (open.bigmodel.cn), which is a privacy and credential-exposure consideration the user should weigh.
Persistence & Privilege
The skill is not 'always: true', has no declared install hooks or config path modifications in the registry, and does not ask to modify other skills or system-wide agent settings. Autonomous invocation is allowed by default but is not in itself a new privilege here.
What to consider before installing
Before installing or invoking this skill: (1) Verify the upstream project and npm package (the SKILL.md references a GitHub repo and expects 'npx autoglm-asr-mcp'); confirm the package name, maintainers, and that the code is trustworthy. (2) Understand that you must provide AUTOGLM_ASR_API_KEY and that audio files will be uploaded to open.bigmodel.cn — avoid sending sensitive audio or use a scoped/test API key. (3) Expect runtime downloads via npx and that ffmpeg must be installed; consider running in an isolated or sandboxed environment. (4) If you require stricter control, prefer a self-hosted/local ASR or vendor-reviewed package, and update the skill registry metadata to declare required env vars/binaries before trusting it.

Like a lobster shell, security has layers — review code before you run it.

latest: vk970jf2j5kwd0ye6jynte63091827kqg
348 downloads
0 stars
1 version
Updated 33m ago
v0.0.1
MIT-0

AutoGLM ASR MCP Server

GitHub: https://github.com/Starrylyn/autoglm-asr-mcp

An agent-oriented speech-to-text MCP service. Core features:

  • Automatic chunking of long audio
  • Concurrent API calls (configurable concurrency)
  • Context-passing modes
  • Timestamped segment output

Installation

# Prerequisite: ffmpeg
brew install ffmpeg  # macOS

# Run the MCP server
npx autoglm-asr-mcp

MCP Configuration

{
  "mcpServers": {
    "autoglm-asr": {
      "command": "npx",
      "args": ["-y", "autoglm-asr-mcp"],
      "env": {
        "AUTOGLM_ASR_API_KEY": "your-api-key"
      }
    }
  }
}

Core Tools

transcribe_audio

Parameter        Type    Required  Default  Description
audio_path       string  yes       -        Absolute path to the audio file
context_mode     string  no        sliding  Context mode
max_concurrency  int     no        5        Concurrency (1-20)

Returns:

  • Full transcription text
  • List of timestamped segments
  • Run statistics (chunk count, elapsed time, mode)

get_audio_info

Returns information about an audio file (duration, format, estimated chunk count).


Core Implementation Notes

1. Concurrency Mechanism

# A Semaphore caps the number of in-flight requests
semaphore = asyncio.Semaphore(concurrency)

async def transcribe_with_semaphore(chunk: AudioChunk) -> None:
    async with semaphore:
        result = await self._transcribe_chunk(chunk, audio_format=audio_format)
        text_results[chunk.index] = result["text"]
        # ...

# All chunks run in parallel
tasks = [transcribe_with_semaphore(chunk) for chunk in non_silent_chunks]
await asyncio.gather(*tasks)

Key points:

  • The Semaphore caps the maximum concurrency
  • asyncio.gather() runs all tasks in parallel
  • Results are stored in a dict text_results: dict[int, str] and ordered by chunk index

2. Context Modes

Mode         Speed    Quality  Description
sliding      -        -        First chunk initializes the context; the rest run in parallel
none         fastest  -        Chunks run independently in parallel, no context passing
full_serial  -        best     Sequential execution with a full context chain

Note: the newer /audio/transcriptions API does not require context passing; all chunks run in parallel by default.
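
As an illustration of sliding mode only (not the server's actual code), here is a minimal asyncio sketch: `fake_transcribe` stands in for the real API call, and the 2000-character cap mirrors the AUTOGLM_ASR_CONTEXT_MAX_CHARS default.

```python
import asyncio

async def fake_transcribe(chunk: str, context: str = "") -> str:
    """Stand-in for the real API call; just echoes the chunk text."""
    await asyncio.sleep(0)  # simulate network I/O
    return chunk

async def transcribe_sliding(chunks: list[str], max_concurrency: int = 5) -> list[str]:
    """Sliding mode: transcribe the first chunk alone to seed the context,
    then run the remaining chunks in parallel, all sharing that context."""
    if not chunks:
        return []
    first = await fake_transcribe(chunks[0])  # serial step seeds the context
    context = first[-2000:]                   # cap the context length
    sem = asyncio.Semaphore(max_concurrency)

    async def worker(chunk: str) -> str:
        async with sem:
            return await fake_transcribe(chunk, context)

    rest = await asyncio.gather(*(worker(c) for c in chunks[1:]))
    return [first, *rest]

print(asyncio.run(transcribe_sliding(["part1 ", "part2 ", "part3"])))
```

The one serial call is the price of sliding mode; none skips it entirely, and full_serial awaits every chunk in order.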

3. Automatic Chunking

chunks = split_audio_on_silence(
    audio,
    max_chunk_duration_ms=self.config.max_chunk_duration * 1000,  # default 25s
)
  • Splits the audio at silence points
  • Each chunk is at most 25 seconds (configurable)
  • Silent chunks are skipped automatically
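
split_audio_on_silence itself is not reproduced on this page; purely to illustrate the idea, a greedy picker that cuts at the last silence inside each 25 s window (with a hard cut as fallback) could look like:

```python
def pick_split_points(silences_ms: list[int], total_ms: int,
                      max_chunk_ms: int = 25_000) -> list[int]:
    """Greedy sketch of silence-aware chunking: walk the candidate silence
    positions and cut at the last silence that keeps the chunk under the cap.
    Falls back to a hard cut when no silence is in range."""
    cuts = []
    start = 0
    while total_ms - start > max_chunk_ms:
        # last silence position within (start, start + max_chunk_ms]
        in_range = [s for s in silences_ms if start < s <= start + max_chunk_ms]
        cut = in_range[-1] if in_range else start + max_chunk_ms
        cuts.append(cut)
        start = cut
    return cuts

# e.g. silences at 10 s, 24 s, and 40 s in a 60 s file
print(pick_split_points([10_000, 24_000, 40_000], 60_000))  # → [24000, 40000]
```

Cutting at silences rather than fixed offsets avoids splitting words mid-utterance, which would hurt transcription quality at chunk borders.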

4. Silence Detection (VAD)

non_silent_chunks = [c for c in chunks if not c.is_silent]
skipped_silent = len(chunks) - len(non_silent_chunks)
  • A VAD pass flags silent segments
  • Silent chunks never hit the API, which saves cost
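
The skill's actual VAD is not shown on this page; a toy energy-based check conveys the principle. The threshold value here is an arbitrary assumption:

```python
import math

def is_silent(samples: list[float], rms_threshold: float = 0.01) -> bool:
    """Toy energy-based VAD: a chunk counts as silent when its RMS
    amplitude falls below a threshold."""
    if not samples:
        return True
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms < rms_threshold

speech = [0.2, -0.3, 0.25, -0.22]
noise_floor = [0.001, -0.002, 0.0015, -0.001]
print(is_silent(speech), is_silent(noise_floor))  # → False True
```

Real VADs (e.g. model-based ones) are far more robust than a fixed RMS threshold, but the API-cost argument is the same: any chunk flagged silent is never uploaded.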

5. Result Merging

# Merge text in chunk order
full_text = "".join(text_results.get(chunk.index, "") for chunk in chunks)

# Merge timestamped segments (adjusting offsets)
for seg in result["segments"]:
    offset_segments.append(TranscriptionSegment(
        start=seg.start + chunk.start_ms / 1000.0,  # add the chunk's start offset
        end=seg.end + chunk.start_ms / 1000.0,
        text=seg.text,
    ))
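
The offset adjustment can be demonstrated with plain data; Segment and merge_segments here are illustrative stand-ins for the skill's TranscriptionSegment handling:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds
    end: float    # seconds
    text: str

def merge_segments(per_chunk: list[tuple[int, list[Segment]]]) -> list[Segment]:
    """Shift each chunk's segment timestamps by the chunk's start offset
    (milliseconds) so they become absolute within the full audio file."""
    merged = []
    for chunk_start_ms, segments in per_chunk:
        for seg in segments:
            merged.append(Segment(
                start=seg.start + chunk_start_ms / 1000.0,
                end=seg.end + chunk_start_ms / 1000.0,
                text=seg.text,
            ))
    return merged

# Two chunks: the second begins 25 s into the file
chunks = [(0, [Segment(0.0, 2.5, "hello")]),
          (25_000, [Segment(0.0, 1.2, "world")])]
print(merge_segments(chunks))
```

Each chunk's API response reports timestamps relative to that chunk, so without the offset every chunk's segments would appear to start near zero.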

Environment Variables

Variable                        Default                                                    Description
AUTOGLM_ASR_API_KEY             (required)                                                 Zhipu API key
AUTOGLM_ASR_API_BASE            https://open.bigmodel.cn/api/paas/v4/audio/transcriptions  API endpoint
AUTOGLM_ASR_MODEL               glm-asr-2512                                               ASR model
AUTOGLM_ASR_MAX_CHUNK_DURATION  25                                                         Max chunk duration (seconds)
AUTOGLM_ASR_MAX_CONCURRENCY     5                                                          Default concurrency
AUTOGLM_ASR_CONTEXT_MAX_CHARS   2000                                                       Max context length (characters)
AUTOGLM_ASR_REQUEST_TIMEOUT     60                                                         Request timeout (seconds)
AUTOGLM_ASR_MAX_RETRIES         2                                                          Retry count

Supported Audio Formats

mp3, wav, m4a, flac, ogg, webm


Calling the API Directly (without MCP)

# Short audio
curl --request POST \
  --url https://open.bigmodel.cn/api/paas/v4/audio/transcriptions \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --form model=glm-asr-2512 \
  --form stream=false \
  --form file=@audio.wav

# Long audio: you must implement chunking, concurrency, and result merging yourself

Best Practices

  1. Short audio (<30s): call the API directly
  2. Long audio: use the MCP service (automatic chunking + concurrency)
  3. Highest quality: full_serial mode
  4. Fastest turnaround: none mode + high concurrency (10-20)
  5. Balanced default: sliding mode + concurrency 5

Common Errors

Error                                                 Cause                    Fix
ffmpeg not found                                      ffmpeg is not installed  brew install ffmpeg
File not found                                        Wrong file path          Use an absolute path
AUTOGLM_ASR_API_KEY environment variable is required  API key not set          Set it in the MCP config
transcriptions endpoint only supports mono audio      Audio is stereo          Converted to mono automatically

Key Code Snippets (Reference Implementation)

Python async concurrent-call example

import asyncio
import httpx

async def transcribe_chunk(client, chunk_data, api_key):
    """Transcribe a single audio chunk."""
    headers = {"Authorization": f"Bearer {api_key}"}
    files = {"file": ("audio.wav", chunk_data, "audio/wav")}
    data = {"model": "glm-asr-2512"}
    
    response = await client.post(
        "https://open.bigmodel.cn/api/paas/v4/audio/transcriptions",
        headers=headers,
        files=files,
        data=data,
    )
    result = response.json()
    return result.get("text", "")

async def transcribe_parallel(chunks, api_key, max_concurrency=5):
    """Transcribe multiple audio chunks concurrently."""
    semaphore = asyncio.Semaphore(max_concurrency)
    results = {}

    # `async with` guarantees the client is closed even if a task raises
    async with httpx.AsyncClient(timeout=60) as client:

        async def limited_transcribe(chunk, index):
            async with semaphore:
                results[index] = await transcribe_chunk(client, chunk, api_key)

        tasks = [limited_transcribe(chunk, i) for i, chunk in enumerate(chunks)]
        await asyncio.gather(*tasks)

    # Merge in chunk order
    return "".join(results.get(i, "") for i in range(len(chunks)))

Further Reading
