Install
openclaw skills install kami-fall-detection

Detect fall events from RTSP camera streams using the KamiClaw cloud API. No local GPU needed — all inference runs server-side. Ideal for elderly care, smart home monitoring, and emergency response.
The captured clip is sent to the cloud API for classification (detectType: FALL, detectSubType: SK_FALL_DETECTION). If a fall is detected, the skill outputs a JSON alarm line and optionally saves the clip, then resets and returns to Stage 1.

This skill requires a KamiClaw API key to authenticate with the cloud service.
Go to https://kamiclaw-skill.kamihome.com and create an account. Choose a plan that fits your usage.
Notice: New users who sign up at https://kamiclaw-skill.kamihome.com automatically receive 200 free credits to get started — no payment required.
After registration, copy your API key from the dashboard. It looks like:
sk_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Tell the agent your API key and RTSP URL. The agent will automatically save them to config.json.
Agent instructions:
- Read config.json and check whether api_key and rtsp_url are both non-empty.
- If api_key is empty, ask: "Please provide your KamiClaw API key. Register at https://kamiclaw-skill.kamihome.com to get one."
- If rtsp_url is empty, ask: "Please provide your RTSP camera URL (e.g. rtsp://192.168.1.100/live/stream) or a local video file path."
- Ask whether alarm clips should be saved, and set save_alarm_clips to true or false accordingly.
- If the user wants Feishu notifications, set feishu_webhook_url in config.json to the webhook URL (e.g., https://open.feishu.cn/open-apis/bot/v2/hook/xxx); otherwise set feishu_webhook_url to an empty string "".
- Update config.json yourself (read existing, merge, write back). Do NOT ask the user to manually edit config.json. Always write it for them.
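The read-merge-write flow the agent is instructed to follow could be sketched like this (a minimal illustration, not the actual agent implementation):

```python
import json
import os

CONFIG_PATH = "config.json"

def update_config(**fields):
    """Read config.json if it exists, merge in new fields, write it back.

    Sketch of the documented read-merge-write flow; the real agent
    logic may differ.
    """
    config = {}
    if os.path.exists(CONFIG_PATH):
        with open(CONFIG_PATH) as f:
            config = json.load(f)
    config.update(fields)  # values provided in chat win over old ones
    with open(CONFIG_PATH, "w") as f:
        json.dump(config, f, indent=2)
    return config

# Example: save the values the user provided in chat
# update_config(api_key="sk_live_xxxxxxxx",
#               rtsp_url="rtsp://192.168.1.100/live/stream")
```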
bash setup.sh
This will:
- Locate a Python interpreter (python3 → python → auto-install).
- Create a .venv/ virtual environment.
- Install dependencies: requests, opencv-python-headless, numpy.

No GPU or heavy ML libraries needed.
.venv/bin/python fall_detect_cloud_skill.py \
--rtsp_url rtsp://192.168.1.100/live/stream \
--api_key sk_live_xxxxxxxx
Or set values in config.json and run without arguments:
.venv/bin/python fall_detect_cloud_skill.py
All parameters are in config.json. The agent auto-fills this when the user provides values in chat.
| Field | Default | Description |
|---|---|---|
| api_key | "" | KamiClaw API key (sent as X-API-Key header) |
| rtsp_url | "" | RTSP stream URL or path to video file |
| run_time | 0 | Max run time in seconds; 0 = unlimited |
| pre_seconds | 4.0 | Seconds buffered before a transition |
| post_seconds | 4.0 | Seconds collected after a transition |
| save_alarm_clips | true | Save alarm video clips to logs/clips/ |
| feishu_webhook_url | "" | Feishu Bot webhook URL for alarm notifications |
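Putting the fields above together, a filled-in config.json might look like this (values illustrative):

```json
{
  "api_key": "sk_live_xxxxxxxx",
  "rtsp_url": "rtsp://192.168.1.100/live/stream",
  "run_time": 0,
  "pre_seconds": 4.0,
  "post_seconds": 4.0,
  "save_alarm_clips": true,
  "feishu_webhook_url": ""
}
```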
To enable automatic alarm notifications via Feishu Webhook:
Step 1: Create a Feishu Bot in your group
Add a custom bot to your Feishu group and copy its webhook URL (e.g. https://open.feishu.cn/open-apis/bot/v2/hook/xxx).

Step 2: Add to config.json
{
"feishu_webhook_url": "https://open.feishu.cn/open-apis/bot/v2/hook/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
Step 3: Test the notification
.venv/bin/python fall_detect_cloud_skill.py
The system will send a test message to confirm the webhook works.
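For reference, sending a message to a Feishu custom-bot webhook can be sketched as below. This assumes the standard Feishu text payload shape ({"msg_type": "text", "content": {"text": ...}}); the skill's own notification code may differ.

```python
import json
import urllib.request

def build_feishu_payload(alarm):
    """Build a Feishu custom-bot text payload (assumed standard format)."""
    text = "Fall alarm: {} (confidence {}): {}".format(
        alarm.get("fall_type"), alarm.get("confidence"), alarm.get("reason"))
    return {"msg_type": "text", "content": {"text": text}}

def send_feishu_alarm(webhook_url, alarm):
    """POST the payload to the webhook and return the HTTP status code."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(build_feishu_payload(alarm)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```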
Agent instructions: When configuring this skill, ask the user if they want Feishu alarm notifications. If yes, guide them to create a Feishu group bot and provide the webhook URL.
CLI args and env vars still work as overrides (priority: CLI > env > config.json > defaults).
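The CLI > env > config.json > defaults priority could be resolved as in this sketch. The environment variable names here (KAMI_API_KEY, KAMI_RTSP_URL) are illustrative assumptions, not documented names:

```python
import os

# Built-in defaults, mirroring the config table above.
DEFAULTS = {"api_key": "", "rtsp_url": "", "run_time": 0}
# Hypothetical env var names; check the skill's docs for the real ones.
ENV_VARS = {"api_key": "KAMI_API_KEY", "rtsp_url": "KAMI_RTSP_URL"}

def resolve(field, cli_args, config):
    """Return the effective value of one field, highest priority first."""
    if cli_args.get(field) is not None:          # 1. CLI argument
        return cli_args[field]
    env = os.environ.get(ENV_VARS.get(field, ""))
    if env:                                      # 2. environment variable
        return env
    if config.get(field) not in (None, ""):      # 3. config.json
        return config[field]
    return DEFAULTS[field]                       # 4. built-in default
```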
Agent: If api_key or rtsp_url is empty in config.json, you MUST ask the user to provide both before running. Never run the skill with either field empty.
When alarm triggers:
{
"alarm": true,
"type": "fall",
"fall_type": "active_falling",
"num_persons": 1,
"confidence": 0.92,
"reason": "Person collapsed from standing to floor near bed",
"frame": 87,
"source": "rtsp://192.168.1.100/live/stream",
"clip": "logs/clips/alarm_fall_20260429_143201.mp4"
}
Agent: When a fall alarm is detected (alarm: true), you MUST notify the user through the connected channel (e.g. chat, Feishu, WeChat) with the alarm details, including fall_type, confidence, reason, and clip path (if available).
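Since alarms are printed as JSON lines on stdout, a supervising process can watch the stream and forward alarms. A minimal sketch (notify is a placeholder for whatever channel the agent uses):

```python
import json
import subprocess

def monitor(cmd, notify):
    """Run the skill command and call notify() for each alarm line.

    Sketch: reads stdout line by line, skips non-JSON log output,
    and forwards any event with "alarm": true.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # not a JSON event line
        if event.get("alarm"):
            notify(f"FALL DETECTED: {event.get('fall_type')} "
                   f"(confidence {event.get('confidence')}), "
                   f"clip: {event.get('clip', 'n/a')}")
    proc.wait()

# Example:
# monitor([".venv/bin/python", "fall_detect_cloud_skill.py"], print)
```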
When no alarm (run_time reached):
{
"alarm": false,
"type": null,
"detail": "Run time limit reached, no fall detected",
"frames_processed": 3600,
"source": "rtsp://192.168.1.100/live/stream"
}
| Code | Meaning |
|---|---|
| 0 | Run time limit reached (normal exit) |
| 1 | Invalid API key, missing config, or fatal error |
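A wrapper script can branch on these exit codes; here is a sketch, with run_skill standing in for the real command (.venv/bin/python fall_detect_cloud_skill.py):

```shell
# Stand-in for the skill; pretend it exited with a fatal error (code 1).
run_skill() { return 1; }

if run_skill; then
  echo "normal exit: run time limit reached"
else
  echo "fatal exit: check api_key and config.json" >&2
fi
```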
The skill runs continuously and prints a JSON alarm line to stdout each time a fall is detected. It does NOT stop after an alarm — it resets and keeps monitoring. Stream disconnections are handled automatically via auto-reconnect.
All logs are saved in the logs/ directory:
| File | Format | Content |
|---|---|---|
| logs/app.log | Human-readable | General application log (rotates daily, keeps 30 days) |
| logs/alarms.jsonl | JSON Lines | Every alarm event + every clip analysis result |
| logs/transitions.jsonl | JSON Lines | Every transition detection event |
Alarm triggered:
{"event": "alarm", "alarm": true, "type": "fall", "fall_type": "active_falling", "num_persons": 1, "confidence": 0.92, "reason": "...", "frame": 87, "source": "rtsp://..."}
Clip analyzed, no alarm:
{"event": "clip_analyzed", "frame": 120, "source": "rtsp://...", "fall_detected": false, "confidence": 0.97, "reason": "...", "_ts": "2026-04-27T14:33:15+0800"}
{"event": "transition", "frame": 83, "source": "rtsp://...", "_ts": "2026-04-27T14:31:58+0800"}
# Count total alarms today
grep '"alarm"' logs/alarms.jsonl | grep "$(date +%Y-%m-%d)" | wc -l
# Count transitions vs alarms (false positive rate)
wc -l logs/transitions.jsonl logs/alarms.jsonl
# Extract all alarm events as CSV-friendly
cat logs/alarms.jsonl | python3 -c "
import sys, json
for line in sys.stdin:
obj = json.loads(line)
if obj.get('event') == 'alarm':
print(f\"{obj['_ts']},{obj.get('type','')},{obj.get('fall_type','')},{obj.get('confidence',0)},{obj.get('frame',0)}\")
"
kami-fall-detection-cloud/
├── SKILL.md # This file
├── config.json # All config in one place (api_key, rtsp_url, etc.)
├── fall_detect_cloud_skill.py # Production skill entry point
├── fall_detect_cloud.py # Development/debug version (with test mode)
├── setup.sh # Venv installer
├── requirements.txt # Dependencies: requests, opencv-python-headless, numpy