Camera YOLO Operator | 摄像头 YOLO 操控者

v1.2.0

Operates the local camera to run YOLO object detection and DA3Metric depth estimation. Supports: plain camera snapshots, YOLO object detection, YOLO + depth distance overlay, and general object trajectory tracking. Trigger words: camera, webcam, YOLO, object detection, snapshot, depth of field, distance measurement, trajectory tracking, pedestrian tracking, vehicle tracking. Supported platforms: Linux / Windows / ma...

by Morois (@moroiser)
MIT-0
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description align with code and scripts: webcam capture, YOLO detection, DA3Metric depth estimation, and tracking. The files and required Python packages are appropriate for these functions.
Instruction Scope
Runtime instructions tell the agent to read the local camera, save images/videos, and download models from Ultralytics/Hugging Face. The skill reads standard environment variables (OPENCLAW_WORKSPACE, YOLO_MODEL_PATH, proxy variables) and its deployment docs may suggest changing device permissions (/dev/video*). That suggestion is related to camera access, but it is a privileged filesystem operation the user should understand before running.
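The environment-variable lookup described above can be sketched as follows. This is a minimal illustration of the implied behavior, not the skill's actual code; the function name and the models/ fallback layout are assumptions.

```python
import os
from pathlib import Path

def resolve_model_path(default_name: str = "model.pt") -> Path:
    """Hypothetical sketch: locate a YOLO model file the way the
    description implies -- an explicit YOLO_MODEL_PATH wins, otherwise
    fall back to a models/ folder under OPENCLAW_WORKSPACE (or the CWD)."""
    explicit = os.environ.get("YOLO_MODEL_PATH")
    if explicit:
        return Path(explicit)
    workspace = Path(os.environ.get("OPENCLAW_WORKSPACE", "."))
    return workspace / "models" / default_name

# Proxy settings would be picked up from the standard variable names:
proxies = {k: os.environ.get(k) for k in ("HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY")}
```

Because only standard environment variables are read, you can audit and override the skill's model location without touching its code.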
Install Mechanism
No automated install spec in the registry; installation is manual via `pip install -r requirements.txt` and a download script. Model downloads occur at runtime (Ultralytics and Hugging Face). These downloads fetch large binary model files from public hosts (expected for ML skills), not from obscure or short-linked URLs.
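One way to keep the runtime-download behavior auditable is to check for a local copy before fetching, so a pre-downloaded, pre-verified model short-circuits any network access. This is a sketch under assumed names, not the skill's actual download logic:

```python
from pathlib import Path
from urllib.request import urlretrieve  # used only when a download is needed

def ensure_model(local_path: Path, url: str) -> Path:
    """Hypothetical helper: reuse a pre-downloaded model file if present,
    enabling offline operation; fetch from the public host only as a
    first-run fallback."""
    if local_path.exists():
        return local_path  # offline / pre-verified copy wins
    local_path.parent.mkdir(parents=True, exist_ok=True)
    urlretrieve(url, local_path)  # network fetch on first run
    return local_path
```

With this pattern, placing a trusted file at the expected path is sufficient to keep the tool fully offline.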
Credentials
The skill does not require secrets or credentials. It optionally reads OPENCLAW_WORKSPACE, YOLO_MODEL_PATH and common proxy variables to locate models or adjust downloads — these are reasonable for a local model-based camera tool.
Persistence & Privilege
The skill does not request always:true and does not modify other skills or global agent configuration. It writes outputs and model files to the user's workspace directory (standard for this kind of tool).
Assessment
This skill appears coherent for local webcam detection and depth estimation, but review the following before installing:

- Privacy & camera access: the scripts open the local webcam and save images/video to your workspace. Only run on systems where you permit camera use.
- Network downloads: depth and YOLO models are downloaded from Ultralytics/Hugging Face on first run. If you need offline operation, pre-download trusted model files and set YOLO_MODEL_PATH, or place models under the skill's models/ folder.
- Permissions: the deployment docs suggest changing /dev/video* permissions or adding the user to the video group; where possible, prefer adding your user to the video group over chmod 666 to limit risk.
- Environment isolation: install Python dependencies in a virtualenv or container to avoid altering system packages and to contain network activity from model downloads.
- Trust of models: model binaries are large and come from third parties; if you require high assurance, obtain models from sources you trust and verify checksums.

Overall, this skill is internally consistent with its stated purpose. If you need higher assurance, run it in an isolated VM or container and supply your own pre-verified model files.
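The checksum verification recommended above needs nothing beyond the standard library; the expected digest would come from whatever source you trust (for example, a release page's published SHA-256). A minimal sketch:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large model binaries
    never need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path, expected_hex: str) -> bool:
    # Compare case-insensitively; published checksums vary in case.
    return sha256_of(path).lower() == expected_hex.lower()
```

Run this against any pre-downloaded model file before pointing YOLO_MODEL_PATH at it.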


Latest version: vk977ejxyxpmvdx1wpj18f0t4q1845cn7

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
