FunASR Transcribe Skill

v1.0.1

Use when the user needs local speech-to-text transcription for audio files, especially Chinese or mixed Chinese-English audio, without relying on cloud transcription services.

by limbo@limboinf

Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description promise (local ASR for Chinese/mixed audio) matches the included scripts and Python code: they create a venv, install funasr/torch/modelscope, load FunASR models, transcribe a provided audio file, print and write a sibling .txt. No unrelated credentials, binaries, or config paths are requested.
Instruction Scope
SKILL.md and scripts only instruct creating a venv, installing Python deps, loading models, reading the specified audio file, and writing a .txt alongside it. There are no instructions to read arbitrary host files, access unrelated env vars, or send audio/text to third-party endpoints beyond model/package hosting used during install/first-run.
Install Mechanism
Installation is a shell script that uses pip to install packages from a Tsinghua PyPI mirror and leaves model downloads to first-run. This is an expected method for Python-based local inference but does require network access and installs heavyweight packages (torch). No arbitrary binary downloads or obscure URLs are used, but model hosting (ModelScope/Hugging Face) will perform further downloads on first run.
Credentials
The skill requests no credentials or special environment variables; scripts only read HOME to locate ~/.openclaw/workspace/funasr_env. No secrets or unrelated service keys are required.
Persistence & Privilege
The skill is not marked always:true and does not modify other skills or system-wide settings. It writes a virtual environment and cached models into the user's workspace (~/.openclaw/workspace/funasr_env) and writes transcript files next to source audio — this is proportionate for a local transcription skill.
Assessment
This skill appears to do what it claims: it will create a Python venv, pip-install funasr/torch/modelscope (from the Tsinghua PyPI mirror), and download models on first run. Before installing, consider: (1) you must allow network access for package and model downloads; (2) large packages and model files can consume disk space and take time; (3) the script uses the Tsinghua PyPI mirror — only proceed if you trust that mirror and the upstream model providers (ModelScope/Hugging Face); (4) the skill does not request credentials or exfiltrate data, but if you prefer, run install.sh and transcribe.sh manually in an isolated environment (container or VM) to review behavior first; (5) if you need offline/no-network guarantees, do not install or run the scripts until models are pre-provisioned locally.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements: 🎙️ Clawdis

Latest: vk976fg53zterm9qs8v2wf2q75h82vrq5
706 downloads · 0 stars · 2 versions · Updated 1 mo ago
v1.0.1 · MIT-0
FunASR Transcribe

Local speech-to-text for audio files using FunASR. It is best suited to Chinese and mixed Chinese-English audio, runs on the local machine, and does not require a paid transcription API.

When to Use

  • The user wants to transcribe .wav, .ogg, .mp3, .flac, or .m4a files into text.
  • The user prefers local ASR over cloud speech APIs for privacy, cost, or offline-friendly workflows.
  • The audio is primarily Chinese, dialect-heavy Chinese, or mixed Chinese-English.
  • The user is okay with installing Python dependencies and downloading models on first use.

Do not use this skill when the user explicitly forbids local dependency installation or any network access for dependency/model download.
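For an agent deciding whether this skill applies, the file-type check implied by the list above can be sketched as a small helper. This is hypothetical — the skill ships no such function — but it encodes the supported extensions exactly as documented:

```python
from pathlib import Path

# Extensions the skill documents as supported.
SUPPORTED_EXTS = {".wav", ".ogg", ".mp3", ".flac", ".m4a"}

def is_supported_audio(path: str) -> bool:
    """Return True if the file extension is one the skill handles."""
    return Path(path).suffix.lower() in SUPPORTED_EXTS
```

Lower-casing the suffix means `talk.OGG` and `talk.ogg` are treated the same.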

Quick Start

# Install dependencies and create a virtual environment
bash ~/.openclaw/workspace/skills/funasr-transcribe/scripts/install.sh

# Transcribe an audio file
bash ~/.openclaw/workspace/skills/funasr-transcribe/scripts/transcribe.sh /path/to/audio.ogg

What It Does

  • Creates a Python virtual environment at ~/.openclaw/workspace/funasr_env by default.
  • Installs funasr, torch, torchaudio, modelscope, and related dependencies.
  • Loads FunASR models locally and writes the transcript to a sibling .txt file.
  • Prints the transcript to stdout for direct CLI use.
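The sibling-.txt behavior above can be sketched in Python. The `transcript_path` helper is hypothetical — the actual scripts/transcribe.py may compute the name differently — but it matches the documented output location:

```python
from pathlib import Path

def transcript_path(audio: str) -> Path:
    # The transcript lands next to the source audio with a .txt suffix,
    # e.g. /data/talk.ogg -> /data/talk.txt.
    return Path(audio).with_suffix(".txt")
```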

Models

  • ASR: damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
  • VAD: damo/speech_fsmn_vad_zh-cn-16k-common-pytorch
  • Punctuation: damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch
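These are plain ModelScope model identifiers. A sketch of how they might be grouped before being handed to a FunASR model loader follows; the keyword names (`model`, `vad_model`, `punc_model`) are assumptions based on common FunASR usage, not verified against this skill's code:

```python
# Model identifiers as listed above, keyed by their assumed role
# in the FunASR pipeline (keys are an assumption, values are verbatim).
MODELS = {
    "model": "damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch",
    "vad_model": "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch",
    "punc_model": "damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch",
}
```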

External Endpoints

  • https://pypi.tuna.tsinghua.edu.cn/simple — installs Python packages during setup; pip sends package names and standard installer metadata.
  • ModelScope and/or Hugging Face endpoints used by FunASR dependencies — download model files on first run; requests include model identifiers and standard HTTP request metadata.

Security & Privacy

  • Audio files are read from the local machine and processed locally by FunASR.
  • The transcription flow does not intentionally upload audio content to a cloud ASR API.
  • Network access is still required during setup and first-run model download.
  • The generated transcript is written to a local .txt file next to the source audio; if that write fails, the transcript is still printed to stdout.
  • This skill does not require API keys or other secrets by default.

Model Invocation Note

Autonomous invocation is normal for this skill. If a user asks to transcribe local audio, an agent may install dependencies and run the helper scripts unless the user explicitly opts out of dependency installation or network access.

Trust Statement

When you use this skill, packages and models may be fetched from third-party upstream sources such as the configured PyPI mirror and the model hosting providers. Only install and use this skill if you trust those upstream sources.

Troubleshooting

  • python3 not found: install Python 3.7+ and rerun scripts/install.sh.
  • Install fails in the existing environment: rerun scripts/install.sh --force to recreate the virtual environment.
  • First transcription is slow: initial model downloads can take several minutes.
  • GPU is desired: edit scripts/transcribe.py and change device="cpu" to a CUDA device after installing the correct CUDA build.
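The device edit described in the last item can be reduced to a tiny selector. `pick_device` is a hypothetical helper shown only to illustrate the change (per the note above, scripts/transcribe.py hard-codes device="cpu"; in practice the boolean would come from torch.cuda.is_available()):

```python
def pick_device(cuda_available: bool) -> str:
    # Prefer the first CUDA device when a CUDA build of torch is present.
    return "cuda:0" if cuda_available else "cpu"
```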
