Install

openclaw skills install kami-video-search

RTSP/RTMP camera stream recording with AI-powered video search. Start/stop background recording, check status, search video clips by natural language description, list recent events, and view logs. Triggers on: record, recording, camera, monitor, surveillance, stream, video search, clip search.

Turn your cameras into a smart memory bank. No more scrubbing through hours of footage — just describe what you're looking for in plain language and instantly find it. Record 24/7, search for "someone in a red jacket walked past the door this morning," and jump straight to the clip. Setup takes 2 minutes. Sign up now and get 200 Credits for free — start searching your footage today!
Match user intent using these trigger words first, then fall back to semantic understanding:
| Intent | Trigger Words (EN) |
|---|---|
| Start recording | start record, start recording, start camera, start monitor, start stream, launch recording |
| Stop recording | stop record, stop recording, stop camera, stop monitor, stop stream, kill recording |
| Check status | recording status, camera status, monitor status, is recording, check status |
| Search video | search video, find video, find clip, look for, search footage, video search |
| List recent | recent events, recent clips, list events, what happened, show recent |
| View logs | show logs, view logs, recording logs, check logs |
If no trigger word matches, use semantic understanding to route to the closest intent.
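The trigger-word pass above can be sketched as a simple first-stage matcher. The intent names and trigger phrases come straight from the table; the routing function itself is a hypothetical illustration, not part of the skill's code.

```python
from typing import Optional

# Trigger phrases copied from the intent table above.
TRIGGERS = {
    "start_recording": ["start record", "start recording", "start camera",
                        "start monitor", "start stream", "launch recording"],
    "stop_recording": ["stop record", "stop recording", "stop camera",
                       "stop monitor", "stop stream", "kill recording"],
    "check_status": ["recording status", "camera status", "monitor status",
                     "is recording", "check status"],
    "search_video": ["search video", "find video", "find clip", "look for",
                     "search footage", "video search"],
    "list_recent": ["recent events", "recent clips", "list events",
                    "what happened", "show recent"],
    "view_logs": ["show logs", "view logs", "recording logs", "check logs"],
}

def route_intent(text: str) -> Optional[str]:
    """Return the first intent whose trigger phrase appears in the text,
    or None so the caller can fall back to semantic routing."""
    lowered = text.lower()
    for intent, phrases in TRIGGERS.items():
        if any(phrase in lowered for phrase in phrases):
            return intent
    return None
```

For example, `route_intent("please start recording the driveway")` yields `start_recording`, while an unmatched request returns `None` and is routed semantically.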
IMPORTANT: On first use, you MUST complete the setup flow before any other operation.
DO NOT use system Python directly. This skill requires its own isolated virtual environment (.venv) in the skill directory.
Follow these steps in exact order. Do not skip ahead.
First, check if Python 3.8+ is installed on the system:
python3 --version
If Python 3.8+ is NOT found (command fails or version is below 3.8):
Tell the user:
🐍 Python 3.8 or higher is required but was not found on your system.
Install Python:
- macOS: brew install python3, or download from https://www.python.org/downloads/
- Ubuntu/Debian: apt update && apt install python3 python3-pip
- Windows: Download from https://www.python.org/downloads/ and check "Add to PATH"

After installing, run python3 --version again to confirm.
DO NOT proceed until Python 3.8+ is confirmed.
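The same version gate can be expressed in Python itself. This is a hypothetical helper for illustration, not one of the skill's scripts.

```python
import sys

# Mirror the manual `python3 --version` check: refuse to continue on
# interpreters older than 3.8. (Illustrative helper only.)
def python_ok(min_version=(3, 8)) -> bool:
    return sys.version_info[:2] >= min_version

if not python_ok():
    raise SystemExit("Python 3.8 or higher is required.")
```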
Once Python 3.8+ is confirmed, check if .venv exists in the skill directory:
ls -la {baseDir}/.venv/bin/python3
If .venv does NOT exist:
Tell the user:
📦 This skill needs its own isolated Python environment. I'll guide you to create it.
Step 1: Create the virtual environment
cd {baseDir}
python3 -m venv .venv

Step 2: Verify it was created

ls -la .venv/bin/

You should see: python, python3, pip, pip3, activate, etc.

Step 3: Activate and install dependencies

source .venv/bin/activate
pip install -r requirements.txt

Step 4: Verify installation

.venv/bin/python -c "import numpy; import requests; import cv2; print('All dependencies OK')"

If this prints "All dependencies OK", setup is complete!
DO NOT proceed until the user confirms .venv is created and dependencies are installed.
If .venv already exists, verify the interpreter and the dependencies:

{baseDir}/.venv/bin/python3 --version
{baseDir}/.venv/bin/python -c "import numpy; import requests; import cv2; print('All dependencies OK')"

If either check fails, tell the user:

⚠️ Dependencies are missing or broken. Let's reinstall:

cd {baseDir}
source .venv/bin/activate
pip install -r requirements.txt
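The import check above can also be done in one pass that reports every missing package at once. This is a hypothetical helper; the package list matches the verify command.

```python
import importlib

# Try each required package and collect the ones that fail to import,
# so the user can be told exactly what to reinstall. (Sketch only.)
def missing_packages(names=("numpy", "requests", "cv2")):
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing
```

An empty result means all dependencies are present; otherwise the returned names go straight into the reinstall message.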
⚠️ MANDATORY: Before EVERY recording start, you MUST confirm all Required Parameters with the user.
DO NOT start recording without explicit user confirmation of these parameters. This is a mandatory step every single time, even if the values haven't changed since the last recording.
⚠️ CRITICAL: Display FULL detailed explanations for EVERY parameter. Do NOT use shortened summaries.
Read the current parameter values from {baseDir}/stream_config.json.

When displaying parameters, use this format for each:
Language Rule: Match the user's language for explanations; keep the labels in English.
Parameter display format:
📹 STREAM_URL = <current_value>
What it does: <full explanation>
How to get it: <step-by-step guidance>
Format: <format with examples>
Range: <valid values>
Default: <default value if applicable>
Impact of different values: <detailed breakdown with scenarios>
Error behavior: <consequences>
IMPORTANT: Use the complete parameter explanations from the "Required Parameters" section below. Do NOT summarize or shorten them. The user needs full context to make informed decisions.
Language Rule: Display parameter explanations in the same language the user uses, but keep the labels themselves in English (What it does, Format, Range, Default, Error behavior).
STREAM_URL (Camera stream address)

What it does: Passed to cv2.VideoCapture(STREAM_URL) to open the video stream. It supports RTSP, RTMP, and HTTP protocols.
How to get it: Find the stream path in your camera's settings (often via its web UI at http://camera-ip/). Common vendor formats: Hikvision rtsp://user:pass@ip:554/Streaming/Channels/1, Dahua rtsp://user:pass@ip:554/cam/realmonitor?channel=1&subtype=0
Format: rtsp://username:password@IP:port/path (e.g., rtsp://admin:password@192.168.1.100:554/stream1). The value must start with rtsp://. Reject any other format and ask the user to correct it.
Error behavior: If the URL is wrong or unreachable, cv2.VideoCapture will fail to open. After 100 consecutive frame read failures, the code treats it as a stream disconnection and attempts reconnection (up to MAX_RECONNECT times, default 3). If all reconnections fail, recording stops entirely.

KAMI_API_KEY (Kamivision API key)
What it does: Sent in the X-API-Key HTTP header when calling the Kamivision https://kamiclaw-skill-api.kamihome.com/v1/detect API for video description generation and text embedding. Without it, all API calls will return authentication errors, and no descriptions or embeddings will be generated — video search will not work.
How to get it: Copy the API key from your Kamivision account (keys begin with sk_live_)
Format: A string starting with sk_live_ (e.g., sk_live_xxxxxxxxxxxxxxxx)
Impact of different values: An invalid or expired key causes each API call to fail after KAMI_API_RETRY attempts, and may lose some descriptions
Error behavior: Failed API calls are retried up to KAMI_API_RETRY times (default 3). After all retries fail, a RuntimeError is raised, the worker thread sends SIGTERM to the process, and recording stops.

DEVICE_ID (Camera identifier)
What it does: Used in (1) video filenames {DEVICE_ID}_{date}_{time}_{index}.mp4, (2) the storage directory structure {DATA_DIR}/{DEVICE_ID}/{date}/{hour}/, and (3) the device_id field in the SQLite index database.
How to get it: Choose a descriptive name (e.g., front-door, living-room, warehouse-1) or a short code (e.g., CAM-001, CAM-002)
Default: CAM-001
Format: Must not contain filesystem-reserved characters: /, \, :, *, ?, ", <, >, |.
Impact of different values: A descriptive name (e.g., front-door) → Easy to identify videos, organized file structure. A generic code (e.g., CAM-001) → Works fine, but less identifiable when managing multiple cameras
Error behavior: If the value contains invalid path characters, mkdir or file creation will fail with an OSError, and recording cannot start.

SKIP_STATIC (Skip static frames)
What it does: When true, each segment is analyzed by is_static_video() — it samples frame pairs, converts to grayscale, and computes mean absolute pixel difference. If below STATIC_THRESHOLD, the segment is marked is_static=True in the database, and the API call is skipped (no description or embedding generated). This means static segments are still recorded on disk but are invisible to search and list queries — they have no description or embedding, so search cannot match them and list_recent explicitly filters them out (not rec.get("is_static", False)).
Default: true (recommended)
Range: true or false (boolean)
Impact of different values: true (recommended for most cases): saves API usage and keeps search results focused on segments with motion. false (describe everything): every segment is described and searchable, including uneventful ones.
Error behavior: No crash risk; the value defaults to true. Setting to false means every segment will be sent to the API, which increases API usage and database size (but not video disk usage).

STATIC_THRESHOLD (Motion sensitivity)
What it does: Used by is_static_video() to decide if a video segment is "static". The function computes np.mean(np.abs(frame1 - frame2)) for sampled grayscale frame pairs (pixel values 0–255). If the average of all pair differences is below this threshold, the segment is considered static.
Default: 5.0
Range: 0.0 – 255.0 (theoretical). Practical range: 1.0 – 20.0. Only used when SKIP_STATIC is true
Impact of different values: Lower values mark fewer segments as static (more API calls); higher values skip more segments. Tuning tip: start with the default 5.0, monitor for a day, check what got skipped
Error behavior: A non-numeric value causes a TypeError crash in the np.mean() comparison.

DATA_DIR (Storage directory)
What it does: Base directory for all recorded data — {DATA_DIR}/{DEVICE_ID}/{YYYYMMDD}/{HH}/ for video files, and {DATA_DIR}/{DEVICE_ID}/index.db for the SQLite index. Directories are created automatically via Path.mkdir(parents=True, exist_ok=True).
How to get it: Pick a writable directory with enough free space. Linux/macOS: /home/youruser/video_data or /mnt/storage/videos. Windows: D:\video_data or C:\Users\YourName\video_data.
Default: ./video_data (relative to working directory)
Impact of different values: Check free disk space first (df -h). With RETENTION_DAYS=3: Should auto-clean, but verify regularly
Error behavior: If the directory is not writable, Path.mkdir() or cv2.VideoWriter() raises OSError/PermissionError and recording fails. If disk space runs out, cv2.VideoWriter.write() silently produces corrupted files (0 bytes), which are then skipped by the processing queue.

RETENTION_DAYS (Data retention period)
What it does: A background task calls index.purge_expired(RETENTION_DAYS) every hour. It queries the SQLite database for records where created_at < (now - RETENTION_DAYS), deletes the corresponding video files from disk, removes the database records, and cleans up empty directories. After purging, it runs VACUUM to reclaim database file space.
Default: 3 (days)
Range: >= 0. Set to 0 to disable auto-cleanup (keep forever). No upper limit, but very large values effectively disable cleanup.
Impact of different values: You can also delete files manually under {DATA_DIR}/{DEVICE_ID}/ — the database will auto-cleanup orphaned entries on next scan
Error behavior: Negative values behave like 0 (no cleanup) because the cutoff time would be in the future. Non-integer values are used directly in timedelta(days=...) — floats work (e.g., 0.5 = 12 hours), but strings cause a TypeError crash.

SUMMARY_UPLOAD_MODE (API upload mode)
What it does: Controls what is uploaded to the Kamivision API. In image mode (default), only extracted frame snapshots are uploaded. In video mode, the entire video file is base64-encoded and uploaded. This directly affects privacy — video mode sends complete footage to the external API.
Default: image
Range: "image" or "video" (string)
Impact of different values: image (recommended): smaller uploads, only snapshots leave the machine. video: the entire video file is base64-encoded and uploaded, so complete footage is sent to the external API.
Error behavior: Any value other than "video" makes the code fall back to image mode. No crash risk.

IMPORTANT: Only update config and start recording AFTER explicit user confirmation.
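The static-segment check governed by SKIP_STATIC and STATIC_THRESHOLD above can be sketched as follows. This is a simplified reconstruction: it assumes the frames are already grayscale uint8 arrays, while the real is_static_video() also handles frame sampling and color conversion.

```python
import numpy as np

# Mean absolute pixel difference over frame pairs, as described for
# STATIC_THRESHOLD. Casting to int16 avoids uint8 wrap-around when
# subtracting. (Simplified sketch of the documented logic.)
def is_static(frame_pairs, threshold=5.0):
    diffs = [
        np.mean(np.abs(a.astype(np.int16) - b.astype(np.int16)))
        for a, b in frame_pairs
    ]
    return float(np.mean(diffs)) < threshold

# Two identical frames give a difference of 0.0 (static at the default
# threshold); a dark-vs-bright pair gives a large difference (motion).
dark = np.zeros((4, 4), dtype=np.uint8)
bright = np.full((4, 4), 200, dtype=np.uint8)
```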
After the user confirms the parameters (or provides updated values):
Write the confirmed values to {baseDir}/stream_config.json.

Never skip the confirmation step or assume previous values are still valid.
All commands MUST use the virtual environment Python interpreter at {baseDir}/.venv/bin/python.
All commands use the script at {baseDir}/stream_recoder2.py with config at {baseDir}/stream_config.json.
{baseDir}/.venv/bin/python {baseDir}/stream_recoder2.py --config {baseDir}/stream_config.json --start-daemon --log-file {baseDir}/stream_recorder.log
After running, parse the JSON output:
If status is "started": Tell user recording has started, show the PID
If status is "already_running": Tell user recording is already running, show the PID

Stop recording:

{baseDir}/.venv/bin/python {baseDir}/stream_recoder2.py --config {baseDir}/stream_config.json --stop-daemon
Parse the JSON output:
If status is "stopped": Confirm recording has stopped
If status is "not_running": Tell user there's no active recording to stop

Check status:

{baseDir}/.venv/bin/python {baseDir}/stream_recoder2.py --config {baseDir}/stream_config.json --status
Parse the JSON output and report the status to the user in a friendly way.
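Parsing the daemon's JSON output can be sketched like this. The status values ("started", "already_running", "stopped", "not_running") are documented above; the summarizing function, the message wording, and the assumption that the output carries a pid field are illustrative.

```python
import json

# Map the documented status values to user-facing messages.
MESSAGES = {
    "started": "Recording has started (PID {pid}).",
    "already_running": "Recording is already running (PID {pid}).",
    "stopped": "Recording has stopped.",
    "not_running": "There is no active recording to stop.",
}

def summarize(raw_json: str) -> str:
    """Turn one line of daemon JSON output into a friendly message."""
    data = json.loads(raw_json)
    template = MESSAGES.get(data.get("status"), "Unknown status: {status}")
    return template.format(pid=data.get("pid", "?"), status=data.get("status"))
```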
The user may provide a search query in natural language, optionally with a time range.
Time range parsing:
Convert relative expressions (e.g., "this morning", "last 2 hours") into the YYYY-MM-DD_HH:MM:SS format
If the user gives explicit timestamps, use YYYY-MM-DD_HH:MM:SS or YYYY-MM-DD HH:MM:SS directly

Basic search:

{baseDir}/.venv/bin/python {baseDir}/stream_recoder2.py --config {baseDir}/stream_config.json --search "QUERY_TEXT" --json
With time range:
{baseDir}/.venv/bin/python {baseDir}/stream_recoder2.py --config {baseDir}/stream_config.json --search "QUERY_TEXT" --time-start "YYYY-MM-DD_HH:MM:SS" --time-end "YYYY-MM-DD_HH:MM:SS" --json
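Turning a relative expression like "last 2 hours" into --time-start/--time-end values can be sketched as below. Only the YYYY-MM-DD_HH:MM:SS format comes from the docs; the helper itself is hypothetical.

```python
from datetime import datetime, timedelta

# Build (start, end) strings in the documented YYYY-MM-DD_HH:MM:SS
# format from a "last N hours" request. (Illustrative helper, not
# part of stream_recoder2.py.)
def last_hours_range(hours, now=None):
    now = now or datetime.now()
    fmt = "%Y-%m-%d_%H:%M:%S"
    return (now - timedelta(hours=hours)).strftime(fmt), now.strftime(fmt)
```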
Parse the JSON output and present results to the user:
{baseDir}/.venv/bin/python {baseDir}/stream_recoder2.py --config {baseDir}/stream_config.json --list HOURS --json
Default HOURS to 24 if user doesn't specify. Parse and present results showing time, description, and video path.
tail -100 {baseDir}/stream_recorder.log
Show the last 100 lines of the log file. If the user asks for more, increase the line count. If the log file doesn't exist, tell the user recording hasn't been started yet.
If .venv is missing: create .venv per Step 0, then install dependencies.
If dependencies are missing: run {baseDir}/.venv/bin/pip install -r {baseDir}/requirements.txt.
Security: The stream_config.json file contains the camera stream URL (which may include credentials) and the Kamivision API key. Use a dedicated camera account and API key with the least privileges. Never share this file. Rotate credentials if you uninstall or stop using the skill.
Privacy: In the default mode (SUMMARY_UPLOAD_MODE=image), only extracted frame snapshots are sent to the Kamivision API. If SUMMARY_UPLOAD_MODE=video, the entire video file is uploaded. Always verify this setting before starting recording.
All Python commands must run through the skill's own .venv.
Respond in the same language the user uses. If the user writes in Chinese, respond in Chinese. If in English, respond in English.
For parameter explanations during recording setup, follow the Language Rule in the Required Parameters section.