Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

VEED UGC

v1.0.1

Generate UGC-style promotional videos with AI lip-sync. Takes an image (person with product from Morpheus/Ad-Ready) and a script (pure dialogue), creates a video of the person speaking. Uses ElevenLabs for voice synthesis.

5 stars · 1.1k downloads · 3 current · 3 all-time
by Paul de Lavallaz (@pauldelavallaz)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for pauldelavallaz/veed-ugc.

Prompt preview: Install & Setup
Install the skill "VEED UGC" (pauldelavallaz/veed-ugc) from ClawHub.
Skill page: https://clawhub.ai/pauldelavallaz/veed-ugc
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install pauldelavallaz/veed-ugc

ClawHub CLI


npx clawhub@latest install veed-ugc
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (generate UGC videos with lip-sync/TTS) match the code and SKILL.md: the script uploads an image, queues a run at ComfyDeploy, polls, and downloads the result. Mentioning ElevenLabs is reasonable because a voice_id is used, but ElevenLabs credentials are not required by the bundled code (ComfyDeploy is the service actually contacted).
Instruction Scope
SKILL.md and the included script instruct the agent to upload user-supplied images and script text to https://api.comfydeploy.com and to include an Authorization Bearer API key. The code also prints debug info and the first 500 characters of API responses to stdout, which can leak sensitive data (including tokens or returned URLs). The skill will transmit user images and text to an external service — this is expected for the stated purpose but is a privacy/data-exfiltration consideration that must be explicit to users.
Install Mechanism
There is no install spec (instruction-only plus an included Python script). Nothing is downloaded from arbitrary URLs and no install-time code execution is requested. Risk from install mechanism is low.
Credentials
The skill manifest declares no required environment variables, but the script requires a ComfyDeploy API key, supplied either via the --api-key flag or the COMFY_DEPLOY_API_KEY environment variable. That mismatch is incoherent and could confuse users. No ElevenLabs secret is requested (because ComfyDeploy handles TTS), which is plausible, but the SKILL.md's voice/ElevenLabs messaging could mislead users into thinking an ElevenLabs key is required. The script's debug printing of response bodies can expose credentials or other sensitive return values in logs.
Persistence & Privilege
The skill is not always-enabled and does not request persistence or modify other skill/system settings. It runs on-demand and does not escalate privileges.
What to consider before installing
Before installing or running this skill, note the following:

  1. The included Python script uploads any image and script text you provide to https://api.comfydeploy.com. Do not upload images of real people without explicit consent.
  2. The script requires a ComfyDeploy API key (COMFY_DEPLOY_API_KEY or --api-key) even though the skill metadata does not list that requirement. This mismatch is suspicious; provide a key with minimal privileges.
  3. The script prints the first 500 characters of each API response to stdout, which can leak sensitive information to logs. Review or remove these debug prints if you care about secrecy.
  4. ElevenLabs is referenced only for voice IDs; no ElevenLabs credential is included, because TTS is performed by the ComfyDeploy workflow. Verify this behavior with the service owner.
  5. The source/homepage is unknown. Prefer packages with a verifiable source, or inspect the code thoroughly and run it in an isolated environment.

If you are unsure, do not use this skill with real users' images or private scripts until you confirm the service's policy and fix the manifest mismatch (declare the COMFY_DEPLOY_API_KEY requirement).
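If you keep the debug prints, one mitigation is to mask anything token-shaped before it reaches stdout. Below is a minimal sketch in Python, assuming you patch the bundled script yourself; redact_secrets and safe_debug are hypothetical helpers, not part of the skill as distributed.

```python
import re

# Hypothetical helpers (not part of the bundled generate.py): mask values that
# look like bearer tokens or API keys before printing responses to stdout/logs.
SECRET_PATTERNS = [
    re.compile(r"(Bearer\s+)[A-Za-z0-9._\-]+"),  # Authorization headers
    re.compile(r"(api[_-]?key[\"']?\s*[:=]\s*[\"']?)[A-Za-z0-9._\-]+",
               re.IGNORECASE),                   # api_key fields in JSON/config
]

def redact_secrets(text: str) -> str:
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(r"\1[REDACTED]", text)
    return text

def safe_debug(body: str, limit: int = 500) -> str:
    # Truncate first (mirroring the script's 500-char print), then redact.
    return redact_secrets(body[:limit])
```

This keeps the debug output useful for troubleshooting while keeping raw credentials out of logs.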

Like a lobster shell, security has layers — review code before you run it.

latest: vk975e6rs6wjjh8knjdwvs9dtgs8106y7
1.1k downloads
5 stars
2 versions
Updated 6h ago
v1.0.1
MIT-0

Veed-UGC

Generate UGC (User Generated Content) style promotional videos with AI lip-sync using ComfyDeploy's Veed-UGC workflow.

Overview

Veed-UGC transforms static images into dynamic promotional videos:

  1. Takes a photo of a person with a product (from Morpheus or Ad-Ready)
  2. Receives a script (pure dialogue text)
  3. Creates a lip-synced video of the person speaking the script

Perfect for creating authentic-feeling promotional content at scale.

API Details

Endpoint: https://api.comfydeploy.com/api/run/deployment/queue
Deployment ID: 627c8fb5-1285-4074-a17c-ae54f8a5b5c6

Required Inputs

| Input | Description | Example |
| --- | --- | --- |
| image | URL of person+product image | Output from Morpheus/Ad-Ready |
| script | Pure dialogue text | "Hola che! Cómo anda todo por allá?" |
| voice_id | ElevenLabs voice ID | Default: PBi4M0xL4G7oVYxKgqww |
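The three inputs above map directly onto the inputs object of the queue request body. A minimal Python sketch of assembling that payload, using the deployment ID from API Details (build_payload is an illustrative helper, not part of the bundled script):

```python
DEPLOYMENT_ID = "627c8fb5-1285-4074-a17c-ae54f8a5b5c6"
DEFAULT_VOICE_ID = "PBi4M0xL4G7oVYxKgqww"

def build_payload(image_url: str, script: str,
                  voice_id: str = DEFAULT_VOICE_ID) -> dict:
    # Mirrors the Required Inputs table: image, script, voice_id.
    return {
        "deployment_id": DEPLOYMENT_ID,
        "inputs": {
            "image": image_url,
            "script": script,      # pure dialogue only (no annotations)
            "voice_id": voice_id,
        },
    }
```

The same structure appears in the Direct API Call example below; only the transport differs.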

⚠️ CRITICAL: Script Format

The script input must be PURE DIALOGUE ONLY:

CORRECT:

Hola che! Cómo anda todo por allá? Mirá esto que acabo de probar, una locura total.

WRONG - No annotations:

[Entusiasta] Hola che! (pausa) Cómo anda?

WRONG - No tone directions:

Tono argentino informal: Hola che!

WRONG - No stage directions:

*sonríe* Hola che! *levanta el producto*

WRONG - No titles/labels:

ESCENA 1:
Hola che!

Just write exactly what the person should say. Nothing else.
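Because annotations slip in easily, a pre-flight check can catch them before a run is queued. A rough heuristic sketch covering only the four mistakes shown above; it will over-flag legitimate parentheses or colons, and this helper is illustrative, not part of the skill:

```python
import re

# Heuristic patterns for the formatting mistakes shown above.
ANNOTATION_PATTERNS = {
    "bracketed annotation": re.compile(r"\[[^\]]+\]|\([^)]*\)"),  # [Entusiasta], (pausa)
    "stage direction": re.compile(r"\*[^*]+\*"),                  # *sonríe*
    "tone/label line": re.compile(r"^[^\n.!?]{1,40}:\s", re.MULTILINE),  # ESCENA 1:
}

def check_script(script: str) -> list[str]:
    """Return a list of problems found; an empty list means it looks clean."""
    return [name for name, pattern in ANNOTATION_PATTERNS.items()
            if pattern.search(script)]
```

A non-empty result means the script probably contains something the person should not literally say aloud.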

Voice IDs (ElevenLabs)

| Voice | ID | Description |
| --- | --- | --- |
| Default | PBi4M0xL4G7oVYxKgqww | Main voice |

More voices can be added from ElevenLabs.

Usage

uv run ~/.clawdbot/skills/veed-ugc/scripts/generate.py \
  --image "https://example.com/person-with-product.png" \
  --script "Hola! Les quiero mostrar este producto increíble que acabo de probar." \
  --output "ugc-video.mp4"

With local image file:

uv run ~/.clawdbot/skills/veed-ugc/scripts/generate.py \
  --image "./morpheus-output.png" \
  --script "Mirá, yo antes no usaba esto pero ahora no puedo vivir sin él." \
  --voice-id "PBi4M0xL4G7oVYxKgqww" \
  --output "promo-video.mp4"

Direct API Call

const response = await fetch("https://api.comfydeploy.com/api/run/deployment/queue", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"
  },
  body: JSON.stringify({
    "deployment_id": "627c8fb5-1285-4074-a17c-ae54f8a5b5c6",
    "inputs": {
      "image": "/* put your image url here */",
      "voice_id": "PBi4M0xL4G7oVYxKgqww",
      "script": "Hola che! Cómo anda todo por allá?"
    }
  })
});
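The call above only queues the run; per the security scan, the bundled script then polls until the output is ready. The exact polling endpoint and response fields are not documented here, so the sketch below abstracts them behind a caller-supplied get_status function; the "status" and "output_url" field names are assumptions, not the script's actual implementation.

```python
import time

def wait_for_run(get_status, timeout_s: float = 300.0, poll_s: float = 5.0,
                 sleep=time.sleep):
    """Poll get_status() until the run succeeds, fails, or times out.

    get_status must return a dict with a "status" key and, on success, an
    "output_url" key. These field names are assumptions; check the actual
    ComfyDeploy API response for the real shape.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        run = get_status()
        if run.get("status") == "success":
            return run["output_url"]
        if run.get("status") == "failed":
            raise RuntimeError("run failed")
        sleep(poll_s)
    raise TimeoutError("run did not finish in time")
```

Since the Notes section quotes ~2-5 minutes of processing, the 300-second default may need raising for long scripts.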

Workflow Integration

Typical Pipeline

  1. Generate image with Morpheus/Ad-Ready

    uv run morpheus... --output product-shot.png
    
  2. Write the script (pure dialogue)

  3. Create UGC video from the image

    uv run veed-ugc... --image product-shot.png --script "..." --output promo.mp4
    

Output

The workflow outputs an MP4 video file with:

  • The original image animated with lip-sync
  • AI-generated voiceover from the script
  • Natural head movements and expressions

Notes

  • Image should clearly show a person's face (frontal or 3/4 view works best)
  • Script is spoken exactly as written - no interpretation
  • Video length depends on script length
  • Processing time: ~2-5 minutes depending on script length
