# YouTube Transcription Generator
## Analysis
This instruction-only skill is purpose-aligned with transcribing YouTube videos, but users should note that it requires local CLI setup, a VLM Run API key, downloading media, and sending video content to an external provider.
## Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
> User provides a **YouTube URL** ... Download the video ... with **yt-dlp**. Run: `vlmrun chat ... -i <downloaded_file> -o <output_dir>`.
The skill directs the assistant to chain local CLI tools using a user-provided URL and output path. This is expected for the transcription workflow, but it still means commands can download media and write files.
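The download-then-transcribe chain can be sketched as argv lists rather than interpolated shell strings, which keeps a user-supplied URL from being interpreted by the shell. The output filename and the exact flag order here are illustrative assumptions, not the skill's verbatim invocation:

```python
def build_commands(url: str, output_dir: str) -> list[list[str]]:
    """Build the download and transcription commands as argv lists.

    Passing argv lists (e.g. to subprocess.run without shell=True) means a
    hostile URL cannot inject extra shell commands. The video filename and
    flag layout are assumptions for illustration.
    """
    downloaded = f"{output_dir}/video.mp4"  # assumed output path
    return [
        ["yt-dlp", "-o", downloaded, url],
        ["vlmrun", "chat", "Transcribe this video...",
         "-i", downloaded, "-o", output_dir],
    ]
```

Even with list-form commands, the agent still downloads media and writes files wherever the user-provided output path points, so the finding's caution stands.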
> `uv pip install -r requirements.txt` ... `python scripts/run_transcription.py "https://www.youtube.com/watch?v=VIDEO_ID" -o ./output`
The instructions reference a requirements file and helper script, but the provided manifest contains only SKILL.md. The package-install and script-run steps are central to the stated purpose, yet the referenced files are not present for review.
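A reviewer (or cautious agent) can check for the referenced-but-missing files before running any install or script step. The two relative paths below are taken from the skill's own instructions; a manifest containing only SKILL.md will report both as missing:

```python
from pathlib import Path

def missing_references(skill_dir: str) -> list[str]:
    """Return files the skill's instructions reference but does not ship."""
    referenced = ["requirements.txt", "scripts/run_transcription.py"]
    root = Path(skill_dir)
    # Only files physically present in the manifest pass the check.
    return [rel for rel in referenced if not (root / rel).is_file()]
```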
Checks whether tool use, credentials, dependencies, identity, account access, or inter-agent boundaries are broader than the stated purpose.
> Ensure `.env` (or `.env.local`) contains `VLMRUN_API_KEY`.
The skill requires a provider API key and instructs the assistant to check for it. This is purpose-aligned for VLM Run, but the registry metadata does not declare required env vars or a primary credential.
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
> Transcribes the video with **vlmrun** (Orion visual AI) ... `vlmrun chat "Transcribe this video..." -i <downloaded_file> -o <output_dir>`.
The workflow sends the downloaded video file to the VLM Run provider for transcription. This external processing is disclosed and purpose-aligned, but users should understand that media content leaves the local environment.
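One way to make this disclosed data flow explicit at run time is a consent gate before the upload command fires. This gate is not part of the skill; it is a hypothetical wrapper showing how a cautious agent could surface the external transfer:

```python
def confirm_upload(filename: str, provider: str, ask=input) -> bool:
    """Ask the user before a local media file is sent to an external provider.

    `ask` is injectable so the gate can be tested or driven non-interactively;
    the default is plain input(). Returns True only on an explicit yes.
    """
    prompt = f"Send {filename} to {provider} for transcription? [y/N] "
    return ask(prompt).strip().lower() in ("y", "yes")
```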
