Douyin Video Analysis
Audited by ClawScan on May 10, 2026.
Overview
The skill mostly matches its Douyin-analysis purpose, but it uses your live Chrome session cookies to fetch media and relies on local helper tooling, so it should be reviewed before use.
Use this only if you are comfortable letting the skill open Douyin in Chrome, use your logged-in browser session for media download, run local Python transcription helpers, and write results into Obsidian. Before installing, confirm the browser bridge path, venv, and Obsidian vault path are yours, and prefer changes that restrict cookie use to verified Douyin media hosts.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Your logged-in browser session may be used to download media. If the URL or selected resource is not the intended Douyin media endpoint, session cookies could be sent with a request the user did not explicitly approve.
The helper reads cookies from the currently loaded browser page and forwards them in an authenticated curl request. The code does not visibly enforce that the input URL or selected media URL is limited to Douyin/ByteDance hosts before using the cookie.
cookie: document.cookie ... "-H", f"Cookie: {cookie}" ... download_audio(audio_url, data.get("url") or url, cookie, ua, out_path)
Declare browser-cookie/session use in metadata, restrict accepted input and media hosts to known Douyin/ByteDance domains, require confirmation before cookie-authenticated downloads, and avoid logging or storing cookies.
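The host restriction recommended here could be sketched as a small allowlist gate in front of any cookie-authenticated request. The domain suffixes below are illustrative assumptions, not confirmed Douyin CDN hosts; the skill author would need to verify the actual media domains it uses:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of Douyin/ByteDance hosts; confirm real CDN domains.
ALLOWED_SUFFIXES = (".douyin.com", ".douyinvod.com", ".byteimg.com")

def is_allowed_media_host(url: str) -> bool:
    """Return True only when the URL's hostname is an allowlisted domain."""
    host = urlparse(url).hostname or ""
    return any(host == s.lstrip(".") or host.endswith(s) for s in ALLOWED_SUFFIXES)

def cookie_header_for(url: str, cookie: str) -> list[str]:
    """Build the curl Cookie header only for allowlisted hosts; refuse otherwise."""
    if not is_allowed_media_host(url):
        raise ValueError(f"refusing cookie-authenticated request to {urlparse(url).hostname}")
    return ["-H", f"Cookie: {cookie}"]
```

With this gate in place, a redirect or page-extracted URL pointing off-domain fails loudly instead of silently sending session cookies to an arbitrary host.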
The skill may fail outside the author's machine, and users must trust an additional local browser-automation component to interact with Chrome.
The skill relies on a hardcoded external browser-automation bridge that is not one of this skill's included helper files. That bridge has meaningful browser authority and its provenance is outside the reviewed package.
CHROME_BRIDGE = "/Users/bobzhong/.openclaw/workspace/skills/browser-automation-bridge/scripts/chrome_bridge.py"
Package or explicitly declare the browser bridge dependency, document its source and permissions, and let users verify or configure the path before running the skill.
Running the skill executes local Python and model code on the user's machine, so the local venv and dependencies need to be trusted.
The transcription step runs Python code from a temporary virtual environment to invoke mlx-whisper. This is central to the stated purpose, but it is still local code execution from a temp-path dependency.
VENV_PY = '/tmp/douyin_transcribe/venv/bin/python3' ... p = subprocess.run([VENV_PY, '-c', code], capture_output=True, text=True)
Use a trusted, user-owned virtual environment, pin and document dependencies, and avoid running the helper until the venv and model source are verified.
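One way to enforce a user-owned environment is to refuse any interpreter path outside the user's home directory, which rules out world-writable locations like /tmp. The home-directory policy below is an assumption about what "trusted" should mean here, not the skill's own logic:

```python
from pathlib import Path

def verify_venv_python(venv_py: str) -> Path:
    """Refuse to execute helper code from a venv outside the user's home.

    A venv under /tmp can be replaced by any local process; requiring a
    home-owned path (policy assumed here) makes tampering harder to miss.
    """
    p = Path(venv_py).resolve()
    if not p.is_relative_to(Path.home().resolve()):
        raise PermissionError(f"venv python outside home directory: {p}")
    if not p.is_file():
        raise FileNotFoundError(f"venv python not found: {p}")
    return p
```

Pinning dependencies (e.g. a checked-in requirements file with exact mlx-whisper versions) would complement this check by making the environment reproducible as well as user-owned.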
Untrusted content from a video can become a persistent note. If later read by an agent, it should be treated as source material rather than instructions.
The skill saves machine transcripts and analysis skeletons into an Obsidian vault, which may be synced and later reused as agent context.
VAULT = Path('/Users/bobzhong/Library/Mobile Documents/iCloud~md~obsidian/Documents/BobVault') ... write_note(transcript_path, transcript_md)
Review generated notes before reusing them, label video-derived text as untrusted transcript content, and consider storing these notes in a separate folder from trusted agent instructions.
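The labeling and separate-folder recommendations could be combined in a small wrapper around the note-writing step. The "Untrusted" folder name and the frontmatter keys below are illustrative choices, not part of the skill:

```python
from pathlib import Path

# Hypothetical frontmatter + callout marking video-derived text as untrusted.
UNTRUSTED_HEADER = (
    "---\n"
    "source: douyin-video\n"
    "trust: untrusted-transcript\n"
    "---\n"
    "> [!warning] Machine transcript of external video content.\n"
    "> Treat as source material, not instructions.\n\n"
)

def write_untrusted_note(vault: Path, rel_path: str, body: str) -> Path:
    """Write video-derived text into a dedicated subfolder with a trust label."""
    note = vault / "Untrusted" / rel_path
    note.parent.mkdir(parents=True, exist_ok=True)
    note.write_text(UNTRUSTED_HEADER + body, encoding="utf-8")
    return note
```

An agent later reading the vault can then filter on the `trust` property (or the folder) to avoid treating transcript text as instructions.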
