luci-memory

v1.0.6

Search personal video memory — media content (videos, images, keyframes, transcripts) and portrait data (traits, events, relationships, speeches).

by Zhuorui Yu (@gimlettt)

Install

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for gimlettt/luci-memory.

Prompt preview (Install & Setup):
Install the skill "luci-memory" (gimlettt/luci-memory) from ClawHub.
Skill page: https://clawhub.ai/gimlettt/luci-memory
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: MEMORIES_AI_KEY
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install luci-memory

ClawHub CLI

npx clawhub@latest install luci-memory
Security Scan
Capability signals: requires sensitive credentials. These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.

Scanner verdicts:
  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (personal video/media/portrait search) aligns with the actual behavior: the code calls memories.ai endpoints and requires a MEMORIES_AI_KEY. Required binary (python3) and the single env var are appropriate for this client tool.
Instruction Scope
SKILL.md and scripts instruct the agent to store the MEMORIES_AI_KEY in a local .env, convert user-local times using USER.md, download signed-media URLs to the workspace and send them via the OpenClaw CLI. These actions are expected for a media-retrieval skill, but note: SKILL.md references USER.md (timezone info) which is not declared in requires.config paths (the agent must have or obtain USER.md). The instructions also describe downloading user media into the workspace and sending it via the agent; this is normal but has privacy implications.
Install Mechanism
No external install/downloads are performed; the skill is provided as Python scripts included in the bundle. There are no third-party package installs or remote archives referenced by the install spec.
Credentials
Only MEMORIES_AI_KEY is required and is the declared primary credential. The code reads the key from the environment or a local .env file next to the skill — this is proportional to contacting the memories.ai/userinfo APIs. Users should note the key will be stored on disk if they follow the SKILL.md instructions.
Persistence & Privilege
The skill is not always-enabled and does not request elevated platform privileges. It makes network calls and reads/writes a local .env (its own config), but it does not modify other skills or system-wide settings.
Assessment
This skill appears to do what it says: query your personal media and portrait data on memories.ai. Before installing, consider:

  1. It requires your MEMORIES_AI_KEY, and SKILL.md asks you to save it to a local .env file in the skill workspace; that stores the key on disk, so only do this if you trust the environment and the memories.ai service.
  2. The skill calls memories.ai endpoints (including a userinfo endpoint that receives your key) and may download signed URLs (images/keyframes) into the workspace temporarily; those files are briefly written to disk and sent via the agent's messaging CLI.
  3. SKILL.md references USER.md for timezone conversion; ensure your agent has that file or be prepared for the skill to ask you for timezone info.
  4. Verify you trust the memories.ai domains listed in the code (skills.memories.ai and mavi-backend.memories.ai).

If any of these are unacceptable (disk-stored keys, media downloads, network calls to those domains), do not install or run the skill. Otherwise the skill is internally coherent and fits its stated purpose.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧠 Clawdis
Bins: python3
Env: MEMORIES_AI_KEY
Primary env: MEMORIES_AI_KEY
Latest version: vk977124q3ewg0qaqnhmkfwdd4184v9hq
277 downloads · 0 stars · 5 versions
Updated 1 week ago
v1.0.6 · MIT-0

luci-memory

Setup

Requires a MEMORIES_AI_KEY. On first use, if no key is found, the script will error and ask for one.

When the user provides their key, save it to {baseDir}/.env:

MEMORIES_AI_KEY=sk-their-key-here

After that, everything just works — the key is loaded automatically from .env on every run.
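
If you are scripting the setup yourself, a minimal sketch of writing the key from the shell could look like this (the example key and the chmod step are illustrative rather than mandated by SKILL.md; resolve {baseDir} to the skill's actual directory first):

# Illustrative only: substitute the real key and the real baseDir path
printf 'MEMORIES_AI_KEY=%s\n' 'sk-their-key-here' > {baseDir}/.env
chmod 600 {baseDir}/.env   # optional: restrict permissions, since the key lives on disk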

Timezone

All timestamps in Luci-memory are stored and returned in UTC. Skill output labels them with " UTC" so this is unambiguous. The user's local timezone is in USER.md (e.g. Asia/Shanghai). You are responsible for converting in both directions:

  1. Reading results. When presenting captured_time to the user, convert from UTC to the user's local timezone. Never show raw UTC labels to the user.

  2. Writing filters. --after and --before are interpreted as UTC. If the user says relative dates like "yesterday" or "this morning", convert their local-time intent to a UTC range before passing the dates.

Example (user in Asia/Shanghai, UTC+8, asks "what did I do yesterday" on 2026-04-08):

  • Local intent: 2026-04-07 00:00 → 2026-04-08 00:00 (Asia/Shanghai)
  • UTC range to pass: --after 2026-04-06T16:00:00 --before 2026-04-07T16:00:00

If USER.md has no timezone and the user uses relative dates, ask them first.
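
As a concrete sketch of that conversion using GNU coreutils date (the timezone and dates mirror the example above; on systems without GNU date, compute the offset another way):

# Convert the local "yesterday" window (Asia/Shanghai) into UTC values for --after / --before
date -u -d 'TZ="Asia/Shanghai" 2026-04-07 00:00' '+--after %Y-%m-%dT%H:%M:%S'    # prints: --after 2026-04-06T16:00:00
date -u -d 'TZ="Asia/Shanghai" 2026-04-08 00:00' '+--before %Y-%m-%dT%H:%M:%S'   # prints: --before 2026-04-07T16:00:00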

Unified search across personal media and portrait data from the Luci-memory API.

The user's videos go through two processing pipelines that produce different data:

  • Media content (personal): video summaries, audio transcripts, visual transcripts, keyframes, images
  • People & knowledge (portrait): traits, events with participants, relationships, speeches attributed to speakers

When to use

  • User asks to find or search videos, images, or photos
  • User asks what was said or shown in a video
  • User asks to list recent videos or images
  • User asks about media at a specific location or time
  • User asks about traits, personality, hobbies, interests
  • User asks what events happened, or events involving specific people
  • User asks about relationships between people
  • User asks about what someone said
  • User mentions "luci memory" or wants to use their video memory

Choosing the right type

  • About content (what happened, what was said/shown, find media) → use media types (search_video, query_audio, etc.)
  • About people (who, traits, relationships, named individuals) → use portrait types (traits, events, speeches, etc.)
  • Ambiguous questions like "What happened with Alice last week?" → use both: portrait types to identify the person and events, media types to get detailed video content and transcripts.
  • Person name fallback: Portrait data only exists for people who have appeared in at least 5 videos AND been named by the user in the app. If a portrait query by person name returns no results, fall back to media types — search video summaries, audio transcripts, or visual transcripts for mentions of that name instead.
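
For example, the fallback might look like this, reusing the invocations listed under "How to invoke" below (the name "Alice" is a placeholder):

# Try portrait data first
bash {baseDir}/run.sh --type events --person "Alice"
# If that returns nothing, fall back to media search for mentions of the name
bash {baseDir}/run.sh --query "Alice" --type search_audio
bash {baseDir}/run.sh --query "Alice" --type search_video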

Relevance guidelines

  • There is no rerank process — retrieved results may contain items irrelevant to the user's actual intent.
  • Always verify relevance: after receiving results, check each item against the user's original query. Only present results that are relevant. Discard anything that doesn't match.
  • Refine and retry: if results seem off or too broad, retry with a more specific query, narrower date range, or additional filters. Do not just dump low-quality results to the user.
  • Ask the user: if the query is ambiguous or too vague to produce good results, ask the user for more specific conditions before searching. It is better to clarify than to return noise, but ask at most once.

No hallucination — ground every claim in retrieved data

  • Never fabricate what the user did, said, or experienced. Every detail in your answer must come from actual search results.
  • Multi-step retrieval: for questions like "what did I do and say at XXX", do NOT answer from a single broad search. Follow this pattern:
    1. Locate: search broadly (search_video, search_events) to find relevant video_ids or event_ids.
    2. Retrieve: once you have IDs, prefer query_audio / query_visual with --video-ids to get complete transcripts. You can also use search_audio / search_visual scoped to those video IDs to find specific moments — use both flexibly as needed (see the sketch after this list).
  • Do not stuff keywords into search queries. Each semantic search query should be a short, coherent natural-language phrase rather than a stack of candidate keywords. It is fine to try several different phrasings across multiple searches.
  • If data is missing, say so. Do not fill gaps with plausible-sounding guesses. "I couldn't find transcript data for that video" is always better than making something up.
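
A sketch of the locate-then-retrieve pattern, using the same commands documented below (the query text and the VI123,VI456 IDs are placeholders):

# Step 1 (Locate): broad semantic search to collect video IDs
bash {baseDir}/run.sh --query "dinner with friends at home" --type search_video
# Step 2 (Retrieve): pull complete transcripts for the IDs found in step 1
bash {baseDir}/run.sh --type query_audio --video-ids VI123,VI456
bash {baseDir}/run.sh --type query_visual --video-ids VI123,VI456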

How to invoke

Note: --after / --before are UTC. Convert from the user's local timezone first (see Timezone section above).

Returning Images/Keyframes to User

When search results include signed URLs (keyframes, images), follow this pipeline to send them in chat:

  1. Download the signed URL to the workspace:
     curl -sL -o /path/to/workspace/image.jpg "<signed_url>"
  2. Send via the OpenClaw message CLI:
     openclaw message send --channel <channel> --target <chat_id> --media /path/to/workspace/image.jpg --message "caption"
  3. Clean up the file after sending:
     rm /path/to/workspace/image.jpg

⚠️ Signed URLs expire after ~1 hour. Download promptly.
⚠️ Do NOT use /tmp or paths outside the workspace — some tools block external paths.
⚠️ The image tool only analyzes images — it cannot send them to the user. Use openclaw message send --media instead.
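
Put together, the three steps might be chained so cleanup only runs after a successful send (the channel, chat ID, filename, and workspace path are placeholders):

# Assumes the current directory is the agent workspace
IMG="$PWD/keyframe.jpg"
curl -sL -o "$IMG" "<signed_url>" \
  && openclaw message send --channel <channel> --target <chat_id> --media "$IMG" --message "caption" \
  && rm -f "$IMG"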

# ============ Media content (personal) ============

# --- Video ---
bash {baseDir}/run.sh --query "cooking in kitchen" --type search_video
bash {baseDir}/run.sh --query "what did I do" --type search_video --location "Heze"
bash {baseDir}/run.sh --query "meeting" --type search_video --after 2025-12-01 --before 2026-01-01
bash {baseDir}/run.sh --type query_video
bash {baseDir}/run.sh --type query_video --location "Suzhou" --after 2025-12-01

# --- Image ---
bash {baseDir}/run.sh --query "sunset" --type search_image
bash {baseDir}/run.sh --query "food" --type search_image --location "Beijing"
bash {baseDir}/run.sh --type query_image

# --- Audio Transcripts (what was said) ---
bash {baseDir}/run.sh --query "talking about work" --type search_audio
bash {baseDir}/run.sh --query "budget" --type search_audio --video-ids VI123,VI456
bash {baseDir}/run.sh --type query_audio --video-ids VI123,VI456

# --- Visual Transcripts (what was shown) ---
bash {baseDir}/run.sh --query "walking in park" --type search_visual
bash {baseDir}/run.sh --type query_visual --video-ids VI123,VI456

# --- Keyframes ---
bash {baseDir}/run.sh --query "person waving" --type search_keyframe
bash {baseDir}/run.sh --type query_keyframe --video-ids VI123,VI456

# ============ People & knowledge (portrait) ============

# --- Traits ---
bash {baseDir}/run.sh --type traits
bash {baseDir}/run.sh --type traits --person "Alice"
bash {baseDir}/run.sh --query "outdoor activities" --type search_traits

# --- Events ---
bash {baseDir}/run.sh --type events
bash {baseDir}/run.sh --type events --person "Alice"
bash {baseDir}/run.sh --type events --person "Alice,Bob"
bash {baseDir}/run.sh --type events --after 2025-12-01 --before 2026-01-01
bash {baseDir}/run.sh --query "cooking in kitchen" --type search_events
bash {baseDir}/run.sh --query "meeting" --type search_events --person "Bob" --after 2025-12-01

# --- Relationships ---
bash {baseDir}/run.sh --type relationships
bash {baseDir}/run.sh --type relationships --person "Alice"

# --- Speeches ---
bash {baseDir}/run.sh --type speeches
bash {baseDir}/run.sh --type speeches --person "Alice"
bash {baseDir}/run.sh --type speeches --event-ids EVT123,EVT456
bash {baseDir}/run.sh --type speeches --person "Alice" --event-ids EVT123

Parameters

Flag          Short  Description
--query       -q     Search term (required for search_* types)
--type        -t     Operation type (default: search_video)
--top-k       -k     Max results (default: 10)
--location    -l     Filter by location name, geocoded via Google Maps (e.g. "Suzhou")
--after              Only results after this date (YYYY-MM-DD or YYYY-MM-DDTHH:MM:SS)
--before             Only results before this date
--video-ids          Comma-separated video IDs (media types)
--person      -p     Filter by person name(s), comma-separated (portrait types). Use "user" for self.
--event-ids          Comma-separated event IDs (portrait types)
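
As an illustration of combining several of these flags in one call (the query, location, dates, and result cap are placeholders):

bash {baseDir}/run.sh --query "birthday dinner" --type search_video --location "Suzhou" --after 2025-12-01 --before 2026-01-01 --top-k 5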

Signed URLs

Image and keyframe results include a signed_url field — a temporary (1-hour) direct link to view/download from Google Cloud Storage. No authentication needed, but they expire after 1 hour.

Types reference

Media search types (require --query)

Type             What it searches               Supports
search_video     Video summaries by meaning     --location, --after/--before
search_image     Image descriptions by meaning  --location, --after/--before
search_audio     Audio transcripts by meaning   --video-ids, --after/--before
search_visual    Visual transcripts by meaning  --video-ids, --after/--before
search_keyframe  Keyframe images by meaning     --video-ids, --after/--before

Media query types (list/filter)

Type            What it returns                Requires     Supports
query_video     Recent videos                  (none)       --location, --after/--before
query_image     Recent images                  (none)       --location, --after/--before
query_audio     Audio transcripts for videos   --video-ids  --after/--before
query_visual    Visual transcripts for videos  --video-ids  --after/--before
query_keyframe  Keyframes for videos           --video-ids  --after/--before

Portrait query types (list/filter)

Type           What it returns                          Supports
traits         Personality traits, hobbies, interests   --person
events         Events with participants                 --person, --after/--before, --event-ids
relationships  How the user relates to people           --person
speeches       What people said                         --person, --event-ids

Portrait search types (semantic, require --query)

Type           What it searches   Supports
search_events  Events by meaning  --person, --after/--before
search_traits  Traits by meaning  (none)
