Video Crawler
Review
Audited by ClawScan on May 10, 2026.
Overview
The video crawler mostly matches its stated purpose, but it includes an unrelated hardcoded API key and manual dependencies that are not declared in its metadata, so it should be reviewed before use.
Review or remove the hardcoded API key before installing. If you still use the skill, install dependencies from trusted sources, keep downloads in a safe temporary folder, and only run it for video URLs you explicitly want to fetch.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
1. Hardcoded third-party API key
The code embeds a third-party API key even though the skill's stated purpose is only downloading Douyin/Twitter videos and the metadata declares no required credentials.
DASHSCOPE_API_KEY = "sk-c0f66..."
If the key is valid, it may expose or allow misuse of someone else's provider account, and it signals poor credential handling in the package.
Remove the hardcoded key, rotate/revoke it if real, and only declare credentials through documented environment variables if they are actually required.
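A minimal sketch of the recommended pattern, assuming the variable name from the flagged snippet: read the credential from a documented environment variable at runtime instead of embedding it in source. The function name is hypothetical, not part of the skill's code.

```python
import os

def load_api_key(var_name: str = "DASHSCOPE_API_KEY") -> str:
    # Read the key from the environment rather than a hardcoded literal,
    # so the credential never lands in the package or version control.
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; export it before running the skill")
    return key
```

Failing fast with a clear error when the variable is unset also makes the credential requirement visible, which is what the metadata declaration should document.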
2. Undeclared, unpinned manual dependencies
The skill requires manual installation of unpinned packages and a browser runtime, while the registry lists no install spec or required binaries.
pip install playwright requests yt-dlp
playwright install chromium
The tool may fail until dependencies are installed, and unpinned package installs can change over time or pull code from the package ecosystem.
Install dependencies only from trusted package indexes, consider pinning versions, and declare required binaries/dependencies in the skill metadata or install spec.
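One way to pin is a requirements file installed with `pip install -r requirements.txt`. The version numbers below are illustrative placeholders, not vetted releases; check the current versions on PyPI before pinning.

```
# requirements.txt -- pin exact versions so repeated installs are reproducible
playwright==1.47.0   # placeholder version; verify before use
requests==2.32.3     # placeholder version; verify before use
yt-dlp==2025.1.26    # placeholder version; verify before use
```

Pinning also makes the dependency set auditable: a reviewer can check the exact package versions the skill will pull, rather than whatever the index serves at install time.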
3. External downloader invoked with user-provided input
The Twitter path invokes yt-dlp with a user-provided URL and output file. It uses an argument list rather than a shell, and this is aligned with the video-download purpose, but it is still a powerful external downloader.
subprocess.run(cmd, capture_output=True, text=True, timeout=120)
If used on the wrong URL or with an unsafe output path, the agent could download unexpected content or overwrite a local file chosen by the invocation.
Run it only for user-approved Douyin/X links and write outputs to a dedicated downloads or temporary directory.
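A sketch of how a caller could constrain the output path before building the argument list, assuming a dedicated `downloads` directory; the function name and layout are hypothetical, not the skill's actual code.

```python
from pathlib import Path

def build_download_cmd(url: str, filename: str, downloads_dir: str = "downloads") -> list[str]:
    # Resolve the output path and reject anything that escapes the
    # downloads directory (e.g. a filename like "../../etc/passwd").
    base = Path(downloads_dir).resolve()
    out = (base / filename).resolve()
    if base not in out.parents:
        raise ValueError(f"output path {out} escapes {base}")
    # Argument list, not a shell string, so the URL is passed as a
    # single token and cannot inject extra shell commands.
    return ["yt-dlp", "-o", str(out), url]
```

The resulting list can then be handed to `subprocess.run(...)` exactly as the skill already does; keeping path validation separate from execution makes the check easy to test.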
