Local Video Ad Pipeline v0.5
Pass. Audited by ClawScan on May 9, 2026.
Overview
This appears to be a legitimate local video-generation workflow, but it runs local media and AI-server tools and has setup requirements that are not fully declared in metadata.
Before installing, make sure you are comfortable running local Python scripts, ffmpeg, ComfyUI, ACE-Step, and WSL/GPU workloads. Use trusted local endpoints and verify project and output paths before rendering. Review generated prompts and shotlists before batch rendering, and explicitly state conservative or brand-safe creative requirements if the default glamour style is not desired.
Findings (5)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Installation may fail or require manual dependency decisions, and users must rely on the included scripts rather than a declared, pinned install process.
The package includes runnable scripts, and the visible docs and scripts expect local tools and Python packages, but the registry metadata does not declare those requirements. This is an under-declared setup/provenance issue, not evidence of hidden malicious behavior.
Source: unknown
Required binaries (all must exist): none
No install spec — this is an instruction-only skill.
Code file presence: 14 code file(s)
Review the scripts before use, install dependencies from trusted sources, and verify ffmpeg, ComfyUI, ACE-Step, Python packages, and model files manually.
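Because the metadata declares no dependencies, the check must be done by hand. A minimal pre-flight sketch is shown below; the tool list is taken from this finding, and `check_dependencies` is a hypothetical helper, not part of the skill:

```python
import shutil
import subprocess

# Tools named in the finding above; extend with any Python packages the
# scripts import. This list is an assumption, not declared metadata.
REQUIRED_BINARIES = ["ffmpeg", "ffprobe"]

def check_dependencies(binaries):
    """Return the subset of binaries that are missing from PATH."""
    return [b for b in binaries if shutil.which(b) is None]

missing = check_dependencies(REQUIRED_BINARIES)
if missing:
    print(f"Missing required tools: {', '.join(missing)}")
else:
    # `ffmpeg -version` exits 0 when the binary is functional.
    subprocess.run(["ffmpeg", "-version"], check=True, capture_output=True)
    print("All required tools found.")
```

Model files and the ComfyUI/ACE-Step installs still need manual verification; this only confirms the command-line tools resolve.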
The skill can overwrite selected output files and process local media through ffmpeg.
The compose step runs local ffmpeg/ffprobe subprocesses to process video, subtitles, and audio. This is expected for a video pipeline, but it is still local command execution and file output.
cmd = ["ffmpeg", "-y", "-i", str(src), "-vf", vf, ...]
r = subprocess.run(cmd, capture_output=True, text=True)
Run it only on intended project folders, verify output paths before execution, and use trusted ffmpeg/ffprobe binaries.
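One way to enforce that advice is to wrap the subprocess call with path checks before anything is written. The sketch below is a hypothetical guard, not the skill's own code; note it also drops the `-y` flag from the quoted snippet, so ffmpeg will not silently overwrite existing files:

```python
import subprocess
from pathlib import Path

def safe_ffmpeg(src: Path, dst: Path, vf: str, project_root: Path):
    """Run ffmpeg only when both paths stay inside the intended project folder."""
    src, dst = src.resolve(), dst.resolve()
    root = project_root.resolve()
    for p in (src, dst):
        if root not in p.parents and p != root:
            raise ValueError(f"{p} is outside the project root {root}")
    if dst.exists():
        raise FileExistsError(f"Refusing to overwrite {dst}; remove it first")
    # No -y flag: ffmpeg itself will also refuse to clobber dst.
    cmd = ["ffmpeg", "-i", str(src), "-vf", vf, str(dst)]
    return subprocess.run(cmd, capture_output=True, text=True)
```

Resolving paths first means symlinks or `..` segments cannot route output outside the project folder.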
A mistaken or remote ComfyUI URL could expose prompts/job details or run costly render jobs on the chosen server.
The video-render step submits generated workflows to a configurable ComfyUI endpoint. This is purpose-aligned, but if the endpoint is not local/trusted, prompt and job metadata may be sent to that server and queued jobs can consume resources.
urllib.request.Request(f"{d['comfy']}/prompt", data=body, headers={"Content-Type": "application/json"})
Use the default localhost endpoint or another trusted ComfyUI server, and review shotlists/prompts before queuing batches.
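A cheap safeguard is to validate the configured endpoint before any job is queued. The sketch below is an assumption about how such a check could look; port 8188 is ComfyUI's usual default listen port, and `is_local_endpoint` is a hypothetical helper:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_local_endpoint(url: str) -> bool:
    """Return True only if the endpoint resolves to a loopback or private address."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return addr.is_loopback or addr.is_private

comfy = "http://127.0.0.1:8188"
if not is_local_endpoint(comfy):
    raise RuntimeError(f"Refusing to send prompts to non-local endpoint: {comfy}")
```

This blocks accidental submission of prompt and job metadata to a public server, though a deliberately trusted remote host can still be allow-listed by relaxing the check.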
Local GPU memory, WSL services, or queued jobs may remain active until the user stops them.
The workflow intentionally keeps local ComfyUI/Wan models loaded between shots for performance. This is disclosed and user-controlled, but it leaves GPU/server state active after individual script runs.
Never restart WSL ComfyUI between shots unless it is wedged. The first fp16 load is slow. Warm runs are much faster because the models stay resident.
Monitor the local ComfyUI/WSL process and shut it down when rendering is finished.
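A simple way to monitor this is to probe the local server after a render session. The sketch below is an assumption: it checks whether anything still answers on the default port, using ComfyUI's `/system_stats` route (verify that route exists in your ComfyUI build):

```python
import urllib.error
import urllib.request

def comfy_still_running(base="http://127.0.0.1:8188", timeout=2.0) -> bool:
    """Return True if a server still answers on the ComfyUI endpoint."""
    try:
        with urllib.request.urlopen(f"{base}/system_stats", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False

if comfy_still_running():
    print("ComfyUI is still up; stop it (e.g. `wsl --shutdown`) to free GPU memory.")
```

Under WSL, `wsl --shutdown` from Windows terminates the whole distro and releases GPU memory; on a plain Linux host, stopping the ComfyUI process is enough.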
Generated ads may include sexualized adult glamour styling by default if the user does not override the casting/style.
The skill sets a strong default creative direction toward adult glamour imagery unless the user specifies otherwise. This is disclosed in the skill text, but may surprise users expecting a neutral ad-production pipeline.
Default female protagonist casting ... adult glamour, sensual styling, fitted silhouettes, fashion/swimwear/lingerie ... clearly defined G-cup bust silhouette through clothing.
Specify brand-safe, conservative, or alternative casting/style requirements explicitly when invoking the skill.
