sampling-and-indexing

v0.1.0

Standardize video sampling and frame indexing so interval instructions and mask frames stay aligned with a valid key/index scheme.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for wu-uk/dynamic-object-aware-egomotion-sampling-and-indexing.

Prompt Preview: Install & Setup
Install the skill "sampling-and-indexing" (wu-uk/dynamic-object-aware-egomotion-sampling-and-indexing) from ClawHub.
Skill page: https://clawhub.ai/wu-uk/dynamic-object-aware-egomotion-sampling-and-indexing
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install dynamic-object-aware-egomotion-sampling-and-indexing

ClawHub CLI

Package manager switcher

npx clawhub@latest install dynamic-object-aware-egomotion-sampling-and-indexing
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description align with the SKILL.md content (video sampling and index/key hygiene). The pseudocode references Python/OpenCV (cv2) but the skill declares no required binaries or runtime — that's an operational omission (you will need OpenCV or another video reader to run this), not a security red flag.
Instruction Scope
Instructions are narrowly scoped to reading video metadata, choosing sample IDs, and ensuring downstream outputs match those IDs. They only reference local video files and per-frame artifact stores; there are no steps that transmit data externally or access unrelated system config.
Install Mechanism
Instruction-only skill with no install spec and no code files. Nothing is written to disk by the skill itself and there are no external downloads.
Credentials
No environment variables, credentials, or config paths are requested. The skill's needs (access to video files and a video-reading library at runtime) are proportional to its stated purpose.
Persistence & Privilege
always is false and the skill does not request persistent system modifications or elevated privileges. Autonomous invocation is allowed (platform default) but not combined with other risky requests.
Assessment
This skill is narrowly focused and coherent, but note a few practical points before installing or letting an agent run it: (1) the pseudocode uses Python/OpenCV (cv2) — ensure the runtime where the agent runs has a video-reading library available if you expect it to execute the steps; (2) the instructions require the agent to read video files from local paths, so confirm the agent's file-access policy and limit which directories it may read to avoid accidental exposure of sensitive files; (3) the SKILL.md asks you to choose and document whether interval end indices are inclusive or exclusive — decide this up front to avoid downstream mismatches; (4) because this is instruction-only, there is no binary/install risk, but if you or the agent implementers add execution code, review that code (and any third-party packages) for supply-chain concerns.

Like a lobster shell, security has layers — review code before you run it.

latest: vk972qab2hvndatnzy19kbr8n5984vjac
69 downloads · 0 stars · 1 version
Updated 1w ago · v0.1.0 · MIT-0

When to use

  • You need to decide a sampling stride/FPS and ensure all downstream outputs (interval instructions, per-frame artifacts, etc.) cover the same frame range with consistent indices.

Core steps

  • Read video metadata: frame count, fps, resolution.
  • Choose a sampling strategy (e.g., every 10th frame, or a target rate of ~10–15 fps) to produce sample_ids.
  • Only produce instructions and masks for sample_ids; the max index must be < total_frames.
  • Use a strict interval key format such as "{start}->{end}" (integers only). Decide (and document) whether end is inclusive or exclusive, and be consistent.
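
The interval key rule above can be sketched as a small helper; the function name and the choice to pair consecutive sample ids are illustrative assumptions, not part of the skill itself:

```python
def interval_keys(sample_ids):
    """Build strict "{start}->{end}" keys for each consecutive pair of sample ids.

    Whether `end` is inclusive or exclusive is a project-level decision;
    the key format itself stays integers-only either way.
    """
    return [f"{a}->{b}" for a, b in zip(sample_ids, sample_ids[1:])]

# interval_keys([0, 10, 20, 25]) -> ["0->10", "10->20", "20->25"]
```

Because the keys are generated rather than hand-written, the "integers only, no extra text" rule holds by construction.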

Pseudocode

import cv2

VIDEO_PATH = "<path/to/video>"

cap = cv2.VideoCapture(VIDEO_PATH)
if not cap.isOpened():
    raise IOError(f"Cannot open video: {VIDEO_PATH}")
n = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = cap.get(cv2.CAP_PROP_FPS)
cap.release()

step = 10  # example stride: sample every 10th frame
sample_ids = list(range(0, n, step))
if sample_ids and sample_ids[-1] != n - 1:
    sample_ids.append(n - 1)  # always include the final frame
# Generate all downstream outputs only for sample_ids
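
One way to enforce "outputs only for sample_ids" in an NPZ-style per-frame artifact store; the function name and per-id string keying are assumptions for illustration:

```python
import numpy as np

def save_sampled_masks(npz_path, sample_ids, masks_by_frame):
    """Write exactly one array per sampled frame id, and nothing else.

    masks_by_frame maps frame index -> mask array; frames outside
    sample_ids are deliberately dropped so the store matches the sampling.
    """
    arrays = {str(i): masks_by_frame[i] for i in sample_ids if i in masks_by_frame}
    np.savez_compressed(npz_path, **arrays)
```

Keying each array by its frame id (rather than by position) makes the "no extras" self-check a simple comparison of key sets against sample_ids.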

Self-check list

  • sample_ids strictly increasing, all < total frame count.
  • Output coverage max index matches sample_ids[-1] (or matches your documented sampling policy).
  • JSON keys are plain start->end, no extra text.
  • Any per-frame artifact store (e.g., NPZ) contains exactly the sampled frames and no extras.
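
The checklist above can be automated with a few asserts; the `self_check` function and its argument names are illustrative, not part of the skill:

```python
import re

KEY_RE = re.compile(r"\d+->\d+")  # plain start->end, integers only

def self_check(sample_ids, total_frames, interval_keys, stored_ids):
    """Assert the sampling/indexing invariants from the checklist."""
    assert all(a < b for a, b in zip(sample_ids, sample_ids[1:])), \
        "sample_ids must be strictly increasing"
    assert sample_ids and sample_ids[-1] < total_frames, \
        "max sample id must be < total frame count"
    assert all(KEY_RE.fullmatch(k) for k in interval_keys), \
        "keys must be plain start->end with no extra text"
    assert sorted(stored_ids) == sample_ids, \
        "artifact store must contain exactly the sampled frames"
```

Running this once before handing outputs downstream catches off-by-one and stray-key mistakes early, e.g. `self_check([0, 10, 19], 20, ["0->10", "10->19"], [10, 0, 19])` passes, while a duplicated sample id fails the first assert.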
