meeting-to-text

Review

Audited by ClawScan on May 10, 2026.

Overview

The skill mostly matches a local transcription workflow, but it can download and load a speaker model at runtime despite being described as fully local.

Install only if you trust the local Python environment, FFmpeg binary, 3D-Speaker repo, and ModelScope model source. If you require a strictly offline workflow, pre-install and verify all models first, block network access during runs, and confirm the output path before invoking the skill.
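For the strictly offline case, one way to fail fast is to check the model cache before the skill runs, rather than letting the runtime fall back to a network download. A minimal sketch; the required file names and cache layout are placeholders, not taken from the skill itself:

```python
from pathlib import Path

def assert_model_cached(cache_dir: Path, required_files: list[str]) -> None:
    """Raise before any work starts if the speaker-model cache is incomplete,
    instead of letting the runtime silently fetch a remote artifact."""
    missing = [name for name in required_files
               if not any(cache_dir.rglob(name))]
    if missing:
        raise RuntimeError(
            f"model cache incomplete under {cache_dir}; missing: {missing}. "
            "Pre-download and verify the model before running offline."
        )
```

Calling this at startup (with, e.g., the 3D-Speaker cache directory and the expected checkpoint name) turns a surprise download into an explicit setup error.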

Findings (2)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

A first run may contact an external model source and rely on downloaded model artifacts. This can surprise users who expect an offline-only tool and introduces a supply-chain trust requirement.

Why it was flagged

If the speaker model cache is missing, the runtime can fetch a remote model artifact and then load the checkpoint, while the registry has no install spec and the skill is framed as a fully local workflow.

Skill content
from modelscope.hub.snapshot_download import snapshot_download
...
downloaded = snapshot_download(
    SPEAKER_MODEL_ID,
    revision=SPEAKER_MODEL_REVISION,
    cache_dir=str(THREE_D_SPEAKER_CACHE),
)
...
state_dict = torch.load(str(checkpoint_path), map_location="cpu")
Recommendation

Document the network/model download clearly, make it an explicit setup step or opt-in, pin and verify model artifacts where possible, and declare the required local dependencies in metadata or installation instructions.
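The pinning step could look like the sketch below: hash the downloaded checkpoint and refuse to load it on a mismatch. The `torch.load(..., map_location="cpu")` call mirrors the skill's own snippet; the pinned digest and function names are assumptions for illustration:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large checkpoints are not read into memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_verified_checkpoint(checkpoint_path: Path, expected_sha256: str):
    """Refuse to load a checkpoint whose digest does not match the pinned value."""
    actual = sha256_of(checkpoint_path)
    if actual != expected_sha256:
        raise ValueError(
            f"checkpoint digest mismatch for {checkpoint_path}: "
            f"expected {expected_sha256}, got {actual}"
        )
    import torch  # imported lazily; only needed once the artifact is trusted
    return torch.load(str(checkpoint_path), map_location="cpu")
```

The expected digest would be recorded once, at the explicit setup/download step, and checked on every subsequent load.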

What this means

The skill will execute local media-processing software against the file path you provide.

Why it was flagged

The skill runs a local FFmpeg executable on the user-selected media file; this is expected for audio/video normalization and is invoked without `shell=True`.

Skill content
command = [str(FFMPEG_EXE), "-hide_banner", "-loglevel", "error",
           "-y", "-i", str(source_path), ...]
completed = subprocess.run(command, capture_output=True, text=True,
                           encoding="utf-8", errors="replace")
Recommendation

Use a trusted FFmpeg binary and run the skill only on files you intend to process.
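The invocation pattern the skill uses (list arguments, no `shell=True`) can be wrapped so that a missing or non-executable binary fails loudly before any media is touched. A sketch, not the skill's actual code; the helper name is hypothetical:

```python
import shutil
import subprocess

def run_media_tool(exe: str, args: list[str]) -> subprocess.CompletedProcess:
    """Run a media tool with list arguments (no shell involved), raising on failure."""
    exe_path = shutil.which(exe)  # resolves a name on PATH, or validates an absolute path
    if exe_path is None:
        raise FileNotFoundError(f"executable not found or not runnable: {exe}")
    command = [exe_path, *args]
    # check=True turns a nonzero exit status into a CalledProcessError
    return subprocess.run(command, capture_output=True, text=True,
                          encoding="utf-8", errors="replace", check=True)
```

Passing arguments as a list keeps file names with spaces or shell metacharacters from being interpreted by a shell, which is why the skill's existing `subprocess.run(command, ...)` call is the expected shape here.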