# AI Clipping & Highlights

This skill automatically extracts the most engaging parts of a video using the WayinVideo API.
## Execution Workflow
### Step 0: Check API Key

Check if `WAYIN_API_KEY` is available in the environment or user context. If it is missing, ask the user to provide it or create one at https://wayin.ai/wayinvideo/api-dashboard.
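The check above can be sketched as a small shell function; `check_api_key` is a hypothetical helper name for illustration, not one of the skill's scripts:

```shell
# Hypothetical helper: verify the key is present before any API call.
check_api_key() {
  if [ -z "${WAYIN_API_KEY:-}" ]; then
    echo "WAYIN_API_KEY is not set." >&2
    echo "Create one at https://wayin.ai/wayinvideo/api-dashboard" >&2
    return 1
  fi
}
```

Returning a non-zero status instead of exiting lets the caller decide whether to prompt the user for a key.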
### Step 1: Identify Video Source

Determine if the input is a web URL (e.g., a YouTube link) or a local file path.

> [!IMPORTANT]
> The WayinVideo API supports the following platforms for direct URL processing: YouTube, Vimeo, Dailymotion, Kick, Twitch, TikTok, Facebook, Instagram, Zoom, Rumble, Google Drive. If the platform is NOT supported, you must treat it as a local file (download it first if possible, then upload).
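The routing decision in Steps 1-2 can be sketched as a shell helper; `classify_input` and its host pattern are illustrative (the real API may match URLs more precisely than this regex):

```shell
# Illustrative classifier for the routing decision above. The pattern
# covers the supported platforms listed in the callout.
SUPPORTED_HOSTS='youtube\.com|youtu\.be|vimeo\.com|dailymotion\.com|kick\.com|twitch\.tv|tiktok\.com|facebook\.com|instagram\.com|zoom\.us|rumble\.com|drive\.google\.com'

classify_input() {
  case "$1" in
    http://*|https://*)
      if printf '%s\n' "$1" | grep -qiE "$SUPPORTED_HOSTS"; then
        echo "supported-url"    # pass the URL straight to submit_task.py
      else
        echo "unsupported-url"  # download, then upload as a local file
      fi
      ;;
    *)
      echo "local-file"         # upload first (Step 2)
      ;;
  esac
}
```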
### Step 2: Upload (Local Files or Unsupported URLs Only)

If the input is a local file or from an unsupported platform, you MUST upload it first to get an identity token:

```shell
python3 <ABS_PATH_TO_SKILL>/scripts/upload_video.py --file-path <file_path>
```

(If the input is a web URL from a supported platform, skip this step.)
### Step 3: Extract Clips

Submit the video for clipping using the URL or the identity token (from Step 2):

```shell
python3 <ABS_PATH_TO_SKILL>/scripts/submit_task.py --url "<url_or_identity>" [options]
```

This script will output the Project ID and the path to an initial result JSON file in your workspace. Save both values for polling the results later.
Options:

- `--target <lang>`: (Optional) Target language for output content. Auto-detected if omitted. If specified, you MUST read `assets/supported_languages.md` first to find the correct language code.
- `--duration <duration>`: (Optional) Expected duration range for each output clip. Allowed values: `DURATION_0_30` (0-30s), `DURATION_0_90` (0-90s), `DURATION_30_60` (30-60s), `DURATION_60_90` (60-90s), `DURATION_90_180` (90-180s), `DURATION_180_300` (180-300s). Defaults to `DURATION_0_90`. If the user specifies a platform, you MUST read `assets/platform_duration.md` first to determine the correct mapping.
- `--name <string>`: (Optional) A custom name for this task.
- `--export`: (Optional) Enable rendering of clips (returns export links).
- `--top-k <int>`: (Optional) The best K clips to export. Defaults to `10`. Pass `-1` to export all extracted clips.
- `--ratio <ratio>`: (Optional) Aspect ratio: `RATIO_16_9`, `RATIO_1_1`, `RATIO_4_5`, `RATIO_9_16`. Defaults to `RATIO_9_16`. AI reframing is automatically enabled. If the user specifies a platform, you MUST read `assets/platform_ratio.md` first to determine the correct aspect ratio. (Used with `--export`)
- `--resolution <res>`: (Optional) Output resolution: `SD_480`, `HD_720`, `FHD_1080`. Defaults to `HD_720`. (Used with `--export`)
- `--caption-display <mode>`: (Optional) Caption mode: `none`, `both`, `original`, `translation`. Defaults to `original` (or `translation` if `--target` is provided). Pass `none` to explicitly disable captions. (Used with `--export`)
- `--cc-style-tpl <id>`: (Optional) Caption style template ID. Defaults to `temp-static-2` if `--caption-display` is `both`, otherwise `temp-0`. See `assets/caption_style.md` for details. (Used with `--export` and `--caption-display`)
- `--save-dir <path>`: (Optional) The directory where the initial result JSON file will be saved. Defaults to `api_results` in your workspace.
> [!TIP]
> - Use the `--export` flag by default. This ensures you receive downloadable links for the clips immediately. While rendering adds extra processing time, it avoids the need to re-run the task later to get the video files. Skip this flag only if the user specifically requests the raw analysis results as quickly as possible without video rendering.
> - To include subtitles in the desired language in the output video, use: `--export --caption-display translation --target <lang>`.
> - If `--caption-display` is set to `both`, you MUST use a template ID starting with `temp-static-`.
> - If the user specifies the lower or upper bound of clip duration, choose an appropriate value for `--duration` that does not violate the constraint.
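The last tip can be sketched as a small policy function; `pick_duration_for_max` is hypothetical and encodes one reasonable policy (pick the allowed value with the largest upper bound that does not exceed the user's stated maximum, breaking ties toward the wider range), not the skill's actual logic:

```shell
# Hypothetical policy: map a user-stated maximum clip length (seconds)
# to an allowed --duration value that does not exceed it.
pick_duration_for_max() {
  if   [ "$1" -ge 300 ]; then echo "DURATION_180_300"
  elif [ "$1" -ge 180 ]; then echo "DURATION_90_180"
  elif [ "$1" -ge 90 ];  then echo "DURATION_0_90"
  elif [ "$1" -ge 60 ];  then echo "DURATION_30_60"
  else                        echo "DURATION_0_30"  # closest fit below 60s
  fi
}
```

Usage: `pick_duration_for_max 45` selects `DURATION_0_30`, the tightest range whose upper bound stays within a 45-second limit.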
### Step 4: Wait for Results & Monitoring

Immediately after Step 3, start the polling script to get the final results:

```shell
python3 <ABS_PATH_TO_SKILL>/scripts/polling_results.py --project-id <project_id> --save-file <save_file_path> [--event-interval 300]
```
> [!TIP]
> - This script polls the API and may take several minutes. Always use a subagent to run this task whenever possible. Once the subagent is started, you MUST inform the user that the task is processing in the background, that results will be provided as soon as they are available, and that you are free to help with other tasks in the meantime.
> - If your agent framework is OpenClaw (which offers the `openclaw` CLI for sending `systemEvent`s), it is recommended to add `--event-interval 300` to enable continuous progress updates via system events (the default is `0`, i.e. disabled).
Subagent reference prompt (the main agent provides the specific steps):

> "Set `WAYIN_API_KEY=<your_key>` in the environment, then run `python3 <ABS_PATH_TO_SKILL>/scripts/polling_results.py --project-id <id> --save-file <path>`. Whether the polling script succeeds or fails, you MUST report the script's output. Exit immediately after reporting."

The main agent must explicitly include the Project ID and file path from Step 3 in the command given to the subagent. The main agent will read the saved JSON file to process and present the results.
If `--event-interval` is set and this script runs in an OpenClaw subagent, it triggers a `systemEvent` periodically to keep you updated:

- Receive reminder: when you receive a reminder, check the subagent's status.
- Status check:
  - If the subagent is still active, notify the user that processing is ongoing (e.g., "Processing is still in progress; as the video is quite long, it may take a bit more time").
  - If the subagent is no longer active (crashed or stopped), notify the user and offer to retry (start the polling again or resubmit the task).
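The "offer to retry" guidance can be sketched generically; `retry_once` is a hypothetical wrapper, not part of the skill's scripts:

```shell
# Hypothetical wrapper: run a command and, if the first attempt fails,
# retry it once before giving up.
retry_once() {
  "$@" && return 0
  echo "First attempt failed; retrying once..." >&2
  "$@"
}
```

Usage: `retry_once python3 <ABS_PATH_TO_SKILL>/scripts/polling_results.py --project-id <id> --save-file <path>`.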
### Step 5: Report Results

Once the script completes and prints `SUCCESS: Raw API result updated at <path>`, read that file and present the viral clips and highlights to the user.
> [!NOTE]
> - The saved JSON file can be quite large. Before reading, check the line count or file size. If the file is large, process it in chunks; do not attempt to read a very large file into the session context at once.
> - When using `--export`, the `export_link` returned by the API is valid for 24 hours.
> - When presenting `export_link` or other URLs to the user, NEVER truncate, shorten, or summarize the links. Provide the full, original URL to ensure the user can access the content.
> - To download a video, use: `curl -L -o <filename> "<export_link>"`
> - The entire project/results expire after 3 days. After this period, the task must be re-run.
> - If it has been more than 24 hours but less than 3 days, refresh the `export_link` by running: `curl -s -H "Authorization: Bearer $WAYIN_API_KEY" -H "x-wayinvideo-api-version: v2" "https://wayinvideo-api.wayin.ai/api/v2/clips/results/<project_id>"`. Then parse the JSON to get the new `export_link`.
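To pull links out of a saved (or refreshed) result JSON without loading the whole response into context, a helper like this hypothetical `extract_export_links` can be used; it assumes the links appear under keys named `export_link`, which may not match the exact response schema:

```shell
# Hypothetical helper: print every "export_link" value found anywhere
# in a result JSON file, one per line.
extract_export_links() {
  python3 - "$1" <<'PY'
import json, sys

def walk(node):
    # Recursively yield string values stored under an "export_link" key.
    if isinstance(node, dict):
        for k, v in node.items():
            if k == "export_link" and isinstance(v, str):
                yield v
            else:
                yield from walk(v)
    elif isinstance(node, list):
        for item in node:
            yield from walk(item)

with open(sys.argv[1]) as f:
    data = json.load(f)
for link in walk(data):
    print(link)
PY
}
```

Usage: `extract_export_links <save_file_path>`. Because the file is streamed through a subprocess, none of the large JSON enters the session context.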