WayinVideo - AI Clipping

v1.0.6

AI-powered video highlight extraction that identifies the most engaging moments and generates viral-ready video clips. Ideal for social media content creation.

MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description, required binary (python3), required env var (WAYIN_API_KEY), and API endpoints (wayinvideo-api.wayin.ai) are consistent with a third‑party video clipping service integration.
Instruction Scope
SKILL.md instructs the agent to check WAYIN_API_KEY, distinguish web URLs from local files, upload local files and videos from unsupported platforms, submit jobs, and poll results, all within the stated clipping workflow. One broad instruction, 'download it first if possible, then upload', is vague and could lead the agent to use other tools to fetch remote videos; otherwise the runtime instructions stay within the declared purpose.
Install Mechanism
No install spec; the skill is instruction‑plus‑scripts only and requires python3 already on PATH — minimal disk footprint and no remote archive downloads.
Credentials
Only WAYIN_API_KEY is required and declared as the primary credential. The scripts read that single env var and do not request unrelated secrets or config paths.
Persistence & Privilege
The 'always' flag is false; the skill writes JSON result files to an api_results workspace folder (or a provided save-dir) and uses the 'openclaw' CLI to emit system events. These behaviors are reasonable for a polling/notification workflow and are limited to the skill's own runtime artifacts.
Assessment
This skill appears coherent: it uses your WAYIN_API_KEY to call WayinVideo endpoints, uploads local/unsupported videos to Wayin via presigned URLs, and writes JSON results to an api_results folder. Before installing, verify you trust the Wayin service and are willing to provide its API key. Note the scripts will: (1) traverse parent folders to find a workspace root when saving results (looks for AGENTS.md), (2) write result JSON files to disk, and (3) attempt to send notifications by running the 'openclaw' CLI (subprocess). Also be aware SKILL.md allows the agent to 'download' unsupported platform videos — that could cause the agent to use other tools to fetch remote content, so confirm you are comfortable with any downloader tools the agent might have permission to use. If uncertain, run these scripts in an isolated environment and review the code (provided) before giving the real API key.

Like a lobster shell, security has layers — review code before you run it.

latest · vk97779cdpzk78rpfqw4sjpyp0x845yn0


Runtime requirements

🎥 Clawdis
Bins: python3
Env: WAYIN_API_KEY
Primary env: WAYIN_API_KEY

SKILL.md

AI Clipping & Highlights

This skill automatically extracts the most engaging parts of a video using the WayinVideo API.

Execution Workflow

Step 0: Check API Key

Check if the WAYIN_API_KEY is available in the environment or user context. If it is missing, ask the user to provide it or create one at https://wayin.ai/wayinvideo/api-dashboard.
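A minimal sketch of this check in Python (the dashboard URL is taken from this document; the error wording is illustrative):

```python
import os

def require_api_key(env=os.environ):
    """Return the WAYIN_API_KEY, or fail with a pointer to the dashboard."""
    key = env.get("WAYIN_API_KEY")
    if not key:
        raise RuntimeError(
            "WAYIN_API_KEY is missing; provide it or create one at "
            "https://wayin.ai/wayinvideo/api-dashboard"
        )
    return key
```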

Step 1: Identify Video Source

Determine if the input is a web URL (e.g., YouTube link) or a local file path.

[!IMPORTANT] The WayinVideo API supports the following platforms for direct URL processing: YouTube, Vimeo, Dailymotion, Kick, Twitch, TikTok, Facebook, Instagram, Zoom, Rumble, Google Drive. If the platform is NOT supported, you must treat it as a local file (download it first if possible, then upload).
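One way to sketch this routing decision, assuming the hostnames below (the document lists platform names, not exact domains, so the host mapping is an assumption):

```python
from urllib.parse import urlparse

# Illustrative hostnames for the supported platforms listed above.
SUPPORTED_HOSTS = {
    "youtube.com", "youtu.be", "vimeo.com", "dailymotion.com", "kick.com",
    "twitch.tv", "tiktok.com", "facebook.com", "instagram.com", "zoom.us",
    "rumble.com", "drive.google.com",
}

def is_supported_url(source):
    """True if `source` is an http(s) URL on a supported platform;
    anything else (local paths included) goes down the upload route."""
    parsed = urlparse(source)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.netloc.lower().removeprefix("www.")
    return host in SUPPORTED_HOSTS or any(
        host.endswith("." + h) for h in SUPPORTED_HOSTS
    )
```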

Step 2: Upload (Local Files or Unsupported URLs Only)

If the input is a local file or from an unsupported platform, you MUST upload it first to get an identity token: python3 <ABS_PATH_TO_SKILL>/scripts/upload_video.py --file-path <file_path> (If the input is a web URL from a supported platform, skip this step.)
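The upload script's internals are not shown here; a common presigned-URL pattern (an assumption, not the script's confirmed behavior) is to PUT the raw file bytes to a URL the service hands back:

```python
import urllib.request

def build_presigned_put(presigned_url, data):
    """Build (but do not send) a PUT request uploading raw bytes to a
    presigned URL; urllib.request.urlopen(req) would perform the transfer."""
    return urllib.request.Request(
        presigned_url,
        data=data,
        method="PUT",
        headers={"Content-Type": "application/octet-stream"},
    )
```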

Step 3: Extract Clips

Submit the video for clipping using the URL or the identity (from Step 2): python3 <ABS_PATH_TO_SKILL>/scripts/submit_task.py --url "<url_or_identity>" [options]

This script will output the Project ID and the path to an initial result JSON file in your workspace. Save both values for polling the results later.

Options:

  • --target <lang>: (Optional) Target language for output content. Auto-detected if omitted. If specified, you MUST read assets/supported_languages.md first to find the correct language code.
  • --duration <duration>: (Optional) Expected duration range for each output clip. Allowed values: DURATION_0_30 (0-30s), DURATION_0_90 (0-90s), DURATION_30_60 (30-60s), DURATION_60_90 (60-90s), DURATION_90_180 (90-180s), DURATION_180_300 (180-300s). Defaults to DURATION_0_90. If the user specifies a platform, you MUST read assets/platform_duration.md first to determine the correct mapping.
  • --name <string>: (Optional) A custom name for this task.
  • --export: (Optional) Enable rendering of clips (returns export links).
  • --ai-hook: (Optional) Enable automatically generated, attention-grabbing text hooks. (Used with --export)
  • --ai-hook-style <style>: (Optional) Style of the generated hook text. Values: serious (default), casual, informative, conversational, humorous, parody, inspirational, dramatic, empathetic, persuasive, neutral, excited, calm. (Used with --export and --ai-hook)
  • --ai-hook-pos <pos>: (Optional) Position of the generated hook text. Values: beginning (default), end. (Used with --export and --ai-hook)
  • --top-k <int>: (Optional) The best K clips to export. Defaults to 10. Pass -1 to export all extracted clips.
  • --ratio <ratio>: (Optional) Aspect ratio: RATIO_16_9, RATIO_1_1, RATIO_4_5, RATIO_9_16. Defaults to RATIO_9_16. AI reframing is automatically enabled. If the user specifies a platform, you MUST read assets/platform_ratio.md first to determine the correct aspect ratio. (Used with --export)
  • --resolution <res>: (Optional) Output resolution: SD_480, HD_720, FHD_1080 (default), QHD_2K, UHD_4K. (Used with --export)
  • --caption-display <mode>: (Optional) Caption mode: none, both, original, translation. Defaults to original (or translation if --target is provided). Pass none to explicitly disable captions. (Used with --export)
  • --cc-style-tpl <id>: (Optional) Caption style template ID. Defaults to temp-static-2 if --caption-display is both, otherwise word-focus. See assets/caption_style.md for details. (Used with --export and --caption-display)
  • --save-dir <path>: (Optional) The directory where the initial result JSON file will be saved. Defaults to api_results in your workspace.
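As a sketch, an agent could assemble the argv for this step like so (build_submit_command and its defaults are illustrative; only the flag names come from the option list above):

```python
import sys

def build_submit_command(skill_dir, source, export=True, ai_hook=True, target=None):
    """Assemble the argv for submit_task.py; --export and --ai-hook
    default to on, matching the recommendation in this document."""
    cmd = [sys.executable, skill_dir + "/scripts/submit_task.py", "--url", source]
    if export:
        cmd.append("--export")
        if ai_hook:
            cmd.append("--ai-hook")
    if target:
        cmd += ["--target", target]
    return cmd
```

Running it would then be a matter of passing the list to `subprocess.run(cmd, check=True)`.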

[!TIP]

  • Use the --export and --ai-hook flags by default. This ensures you receive downloadable links for the clips immediately, and the clips include attention-grabbing AI-generated text hooks. While rendering adds extra processing time, it avoids the need to re-run the task later to get the video files. Skip these flags only if the user specifically requests the raw analysis results as quickly as possible without video rendering or hooks.
  • To include subtitles in the dedicated language in the output video, use: --export --caption-display translation --target <lang>.
  • If --caption-display is set to both, you MUST use a template ID starting with temp-static-.
  • If the user specifies the lower or upper bound of clip duration, choose an appropriate value for --duration that does not violate the constraint.
  • If the API only partially satisfies the request, use other tools to complete the remaining tasks and request user approval before proceeding. If this is not feasible, suggest the user visit https://wayin.ai/wayinvideo/home, which provides an online video editor and other AI-powered tools.
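For the duration-bound tip above, one way to pick the narrowest allowed range that satisfies the user's constraint (the selection policy is illustrative; the value table is from the option list):

```python
# Allowed --duration values and their (lower, upper) bounds in seconds.
DURATIONS = {
    "DURATION_0_30": (0, 30),
    "DURATION_0_90": (0, 90),
    "DURATION_30_60": (30, 60),
    "DURATION_60_90": (60, 90),
    "DURATION_90_180": (90, 180),
    "DURATION_180_300": (180, 300),
}

def pick_duration(min_s=None, max_s=None, default="DURATION_0_90"):
    """Pick the narrowest range that respects the user's bounds,
    falling back to the documented default when nothing fits."""
    if min_s is None and max_s is None:
        return default
    candidates = [
        (hi - lo, name) for name, (lo, hi) in DURATIONS.items()
        if (min_s is None or lo >= min_s) and (max_s is None or hi <= max_s)
    ]
    return min(candidates)[1] if candidates else default
```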

Step 4: Wait for Results & Monitoring

Immediately after Step 3, start the polling script to get the final results: python3 <ABS_PATH_TO_SKILL>/scripts/polling_results.py --project-id <project_id> --save-file <save_file_path> [--event-interval 300]

[!TIP]

  • This script polls the API and may take several minutes. Always run it in a subagent whenever possible. Once the subagent is started, you MUST inform the user that the task is processing in the background, that results will be delivered as soon as they are available, and that you are free to help with other tasks in the meantime.
  • If your agent framework is OpenClaw (which offers the openclaw CLI for sending system events), it is recommended to add --event-interval 300 to enable continuous progress updates via system events (the default is 0/disabled, in which case the openclaw CLI is not required).
  • When running in background, the script will automatically update the result file whenever new clips are found and send system event notifications if --event-interval > 0.

Subagent Reference Prompt (Main agent provides the specific steps): "Set WAYIN_API_KEY=<your_key> in the environment, then run python3 <ABS_PATH_TO_SKILL>/scripts/polling_results.py --project-id <id> --save-file <path>. Whether the polling script succeeds or fails, you MUST report the script's output. Exit immediately after reporting."

The main agent must explicitly include the Project ID and file path from Step 3 in the command given to the subagent. The main agent will read the saved JSON file to process and present the results.

If --event-interval is set and this script runs in an OpenClaw subagent, it triggers a system event periodically to keep you updated:

  • Receive Reminder: When you receive a reminder, update the user on the current progress.
  • Status Check: Actively check the subagent status every 2 * --event-interval seconds.
    • If the subagent is still active, notify the user that processing is ongoing (e.g., "Processing is still in progress; as the video is quite long, it may take a bit more time").
    • If the subagent is no longer active (crashed or stopped), notify the user and offer to retry (start the polling again or resubmit the task).

Step 5: Report Results

Once the script completes and outputs SUCCESS: Raw API result updated at <path>, read that file and present the viral clips and highlights to the user. Your final response MUST provide links for downloading/previewing the viral clips. You can also tell the user the absolute file path where all results are stored.

[!NOTE]

  • The saved JSON file can be quite large. Before reading it, check the line count or file size. If the file is large, process it in chunks; do not attempt to read a very large file into the session context at once.
  • When using --export, the export_link returned by the API is valid for 24 hours.
  • If the results contain export_link, you MUST explicitly list the full original URLs in your response using the Markdown link format. NEVER truncate, shorten, or alter these URLs.
  • To download the video, use: curl -L -o <filename> "<export_link>"
  • The entire project/results expire after 3 days. After this period, the task must be re-run.
  • If it has been more than 24 hours but less than 3 days, refresh the export_link by running: curl -s -H "Authorization: Bearer $WAYIN_API_KEY" -H "x-wayinvideo-api-version: v2" "https://wayinvideo-api.wayin.ai/api/v2/clips/results/<project_id>". Then parse the JSON to get the new export_link.
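The refresh call above can also be made from Python; build_refresh_request mirrors the curl command, while extract_export_links assumes the payload nests clips under a 'clips' key (an unverified guess at the JSON shape):

```python
import urllib.request

API_BASE = "https://wayinvideo-api.wayin.ai/api/v2/clips/results"

def build_refresh_request(project_id, api_key):
    """Build the GET request that re-fetches results (and fresh
    export_link values) for a project, as in the curl command above."""
    return urllib.request.Request(
        API_BASE + "/" + project_id,
        headers={
            "Authorization": "Bearer " + api_key,
            "x-wayinvideo-api-version": "v2",
        },
    )

def extract_export_links(payload):
    """Collect export_link values; the 'clips' nesting is an assumption."""
    return [c["export_link"] for c in payload.get("clips", []) if "export_link" in c]
```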

Files

9 total
