## Install

```
openclaw skills install luci-memories
```

Search personal video memory — media content (videos, images, keyframes, transcripts) and portrait data (traits, events, relationships, speeches). Use when the user asks about their videos, what happened, what was said, who they know, or their personality.

## API key

Requires a `MEMORIES_AI_KEY`. On first use, if no key is found, the script errors and asks for one.
When the user provides their key, save it to `{baseDir}/.env`:

```
MEMORIES_AI_KEY=sk-their-key-here
```

After that, everything just works — the key is loaded automatically from `.env` on every run.
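Saving the key is a one-line write. A minimal sketch, where `$baseDir` stands in for the skill's real `{baseDir}` and the key value is a placeholder:

```shell
# Write the user's key to the skill's .env file.
# Assumption: $baseDir substitutes for the actual {baseDir}; the key is a placeholder.
baseDir="$PWD/luci-memories"
mkdir -p "$baseDir"
printf 'MEMORIES_AI_KEY=%s\n' 'sk-their-key-here' > "$baseDir/.env"
chmod 600 "$baseDir/.env"   # keep the key readable only by the owner
```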
## Timezones

All timestamps in Luci-memory are stored and returned in UTC. Skill output labels them with " UTC", so this is unambiguous. The user's local timezone is in USER.md (e.g. `Asia/Shanghai`). You are responsible for converting in both directions:
- **Reading results.** When presenting `captured_time` to the user, convert from UTC to the user's local timezone. Never show raw UTC labels to the user.
- **Writing filters.** `--after` and `--before` are interpreted as UTC. If the user says relative dates like "yesterday" or "this morning", convert their local-time intent to a UTC range before passing the dates.
Example (user in Asia/Shanghai, UTC+8, asks "what did I do yesterday" on 2026-04-08):

```
--after 2026-04-06T16:00:00 --before 2026-04-07T16:00:00
```

If USER.md has no timezone and the user uses relative dates, ask them first.
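The "yesterday" range above can be computed rather than worked out by hand. A sketch assuming GNU `date` (which accepts a `TZ="..."` prefix inside the date string) and an installed tzdata:

```shell
# Convert local-day "yesterday" in Asia/Shanghai (user asks on 2026-04-08)
# into a UTC --after/--before range. Assumes GNU date.
after=$(date -u -d 'TZ="Asia/Shanghai" 2026-04-07T00:00:00' +%Y-%m-%dT%H:%M:%S)
before=$(date -u -d 'TZ="Asia/Shanghai" 2026-04-08T00:00:00' +%Y-%m-%dT%H:%M:%S)
echo "--after $after --before $before"
# → --after 2026-04-06T16:00:00 --before 2026-04-07T16:00:00
```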
Unified search across personal media and portrait data from the Luci-memory API.
The user's videos go through two processing pipelines that produce different data:

- **Media pipeline**: raw media content (`search_video`, `query_audio`, etc.)
- **Portrait pipeline**: derived personal data (`traits`, `events`, `speeches`, etc.)

Use `--video-ids` with the `query_audio` / `query_visual` types to get complete transcripts. You can also use `search_audio` / `search_visual` scoped to those video IDs to find specific moments — use both flexibly as needed.

Note: `--after` / `--before` are UTC. Convert from the user's local timezone first (see Timezone section above).
## Returning Images/Keyframes to User

Image and keyframe results include `bucket` and `blob` fields. To send an image in chat, fetch the bytes from the Luci-memory image proxy endpoint, then forward via OpenClaw:
```
curl -sL -o /path/to/workspace/image.jpg \
  "https://skills.memories.ai/luci-memory/personal/image?bucket=<bucket>&blob=<blob>"
openclaw message send --channel <channel> --target <chat_id> --media /path/to/workspace/image.jpg --message "caption"
rm /path/to/workspace/image.jpg
```
⚠️ Always quote the URL — it contains & and the blob may have / characters.
⚠️ Do NOT use /tmp or paths outside the workspace — some tools block external paths.
⚠️ The image tool only analyzes images — it cannot send them to the user. Use openclaw message send --media instead.
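Because the blob may contain `/` and the query string contains `&`, one way to build the URL safely is to percent-encode the blob first. A minimal sketch, assuming `jq` is available; the bucket and blob values below are hypothetical placeholders:

```shell
# Percent-encode the blob with jq's @uri filter, then assemble the proxy URL.
# bucket/blob are placeholder values, not real search results.
bucket="my-bucket"
blob="keyframes/2026/04/frame 01.jpg"
enc_blob=$(printf '%s' "$blob" | jq -sRr @uri)
url="https://skills.memories.ai/luci-memory/personal/image?bucket=${bucket}&blob=${enc_blob}"
echo "$url"
```

The encoded URL can then be passed to `curl` in quotes exactly as in the workflow above.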
## Examples

### Video

```
bash {baseDir}/run.sh --query "cooking in kitchen" --type search_video
bash {baseDir}/run.sh --query "what did I do" --type search_video --location "Heze"
bash {baseDir}/run.sh --query "meeting" --type search_video --after 2025-12-01 --before 2026-01-01
bash {baseDir}/run.sh --type query_video
bash {baseDir}/run.sh --type query_video --location "Suzhou" --after 2025-12-01
```

### Images

```
bash {baseDir}/run.sh --query "sunset" --type search_image
bash {baseDir}/run.sh --query "food" --type search_image --location "Beijing"
bash {baseDir}/run.sh --type query_image
```

### Audio

```
bash {baseDir}/run.sh --query "talking about work" --type search_audio
bash {baseDir}/run.sh --query "budget" --type search_audio --video-ids VI123,VI456
bash {baseDir}/run.sh --type query_audio --video-ids VI123,VI456
```

### Visual

```
bash {baseDir}/run.sh --query "walking in park" --type search_visual
bash {baseDir}/run.sh --type query_visual --video-ids VI123,VI456
```

### Keyframes

```
bash {baseDir}/run.sh --query "person waving" --type search_keyframe
bash {baseDir}/run.sh --type query_keyframe --video-ids VI123,VI456
```

### Traits

```
bash {baseDir}/run.sh --type traits
bash {baseDir}/run.sh --type traits --person "Alice"
bash {baseDir}/run.sh --query "outdoor activities" --type search_traits
```

### Events

```
bash {baseDir}/run.sh --type events
bash {baseDir}/run.sh --type events --person "Alice"
bash {baseDir}/run.sh --type events --person "Alice,Bob"
bash {baseDir}/run.sh --type events --after 2025-12-01 --before 2026-01-01
bash {baseDir}/run.sh --query "cooking in kitchen" --type search_events
bash {baseDir}/run.sh --query "meeting" --type search_events --person "Bob" --after 2025-12-01
```

### Relationships

```
bash {baseDir}/run.sh --type relationships
bash {baseDir}/run.sh --type relationships --person "Alice"
```

### Speeches

```
bash {baseDir}/run.sh --type speeches
bash {baseDir}/run.sh --type speeches --person "Alice"
bash {baseDir}/run.sh --type speeches --event-ids EVT123,EVT456
bash {baseDir}/run.sh --type speeches --person "Alice" --event-ids EVT123
```
## Parameters
| Flag | Short | Description |
|------|-------|-------------|
| `--query` | `-q` | Search term (required for `search_*` types) |
| `--type` | `-t` | Operation type (default: `search_video`) |
| `--top-k` | `-k` | Max results (default: 10) |
| `--location` | `-l` | Filter by location name, geocoded via Google Maps (e.g. "Suzhou") |
| `--after` | | Only results after this date (`YYYY-MM-DD` or `YYYY-MM-DDTHH:MM:SS`) |
| `--before` | | Only results before this date |
| `--video-ids` | | Comma-separated video IDs (media types) |
| `--person` | `-p` | Filter by person name(s), comma-separated (portrait types). Use `user` for self. |
| `--event-ids` | | Comma-separated event IDs (portrait types) |
## Image bytes
Image and keyframe results return `bucket` and `blob` (no signed URLs). To get the actual image bytes, hit the proxy endpoint — see "Returning Images/Keyframes to User" above. The endpoint streams the JPEG bytes directly with no expiration or auth on the client side.
## Types reference
### Media search types (require `--query`)
| Type | What it searches | Supports |
|------|-----------------|----------|
| `search_video` | Video summaries by meaning | `--location`, `--after/before` |
| `search_image` | Image descriptions by meaning | `--location`, `--after/before` |
| `search_audio` | Audio transcripts by meaning | `--video-ids`, `--after/before` |
| `search_visual` | Visual transcripts by meaning | `--video-ids`, `--after/before` |
| `search_keyframe` | Keyframe images by meaning | `--video-ids`, `--after/before` |
### Media query types (list/filter)
| Type | What it returns | Requires | Supports |
|------|----------------|----------|----------|
| `query_video` | Recent videos | — | `--location`, `--after/before` |
| `query_image` | Recent images | — | `--location`, `--after/before` |
| `query_audio` | Audio transcripts for videos | `--video-ids` | `--after/before` |
| `query_visual` | Visual transcripts for videos | `--video-ids` | `--after/before` |
| `query_keyframe` | Keyframes for videos | `--video-ids` | `--after/before` |
### Portrait query types (list/filter)
| Type | What it returns | Supports |
|------|----------------|----------|
| `traits` | Personality traits, hobbies, interests | `--person` |
| `events` | Events with participants | `--person`, `--after/before`, `--event-ids` |
| `relationships` | How user relates to people | `--person` |
| `speeches` | What people said | `--person`, `--event-ids` |
### Portrait search types (semantic, require `--query`)
| Type | What it searches | Supports |
|------|-----------------|----------|
| `search_events` | Events by meaning | `--person`, `--after/before` |
| `search_traits` | Traits by meaning | — |
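The tables above imply a simple client-side rule: every `search_*` type requires `--query`, while the query/list types do not. A minimal sketch of that check (a hypothetical helper, not part of `run.sh`):

```shell
# requires_query TYPE -> exit 0 if TYPE is a search_* type, i.e. needs --query.
requires_query() {
  case "$1" in
    search_*) return 0 ;;
    *)        return 1 ;;
  esac
}

requires_query search_events && echo "search_events needs --query"
```

Running the type through such a guard before invoking `run.sh` avoids a round trip just to learn that `--query` was missing.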