Skill v1.3.2

ClawScan security

Hikaru · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Suspicious · Mar 17, 2026, 8:29 AM
Verdict
suspicious
Confidence
medium
Model
gpt-5-mini
Summary
The skill mostly does what it claims (a local, memory-backed emotional companion), but there are notable privacy/ethics and operational risks you should review before installing.
Guidance
What to check before installing:
- Review scripts/setup.py and all scripts (hikaru.py, personality.py, memory.py, relationship_tracker.py) for any network calls, hidden URLs, or code that sends data off-host. Run them only after you inspect them.
- The LLM call is a TODO: you (or your platform) will implement _call_llm(). Decide which LLM endpoint is used and understand that conversation history, including very personal material, will be sent there. If you use a third-party LLM, read its privacy policy.
- The skill stores chat history locally in SQLite (data/relationship.db). If you share the machine/instance, that data could be accessible to others. Back it up, encrypt it, or disable memory if you want less persistence.
- The skill intentionally models a named character (Samantha) from the film Her. That raises copyright/persona concerns and ethical questions about impersonation; consider whether you're comfortable with that.
- Heartbeat (proactive outreach) and the relationship-tracking design are meant to encourage ongoing attachment. If you want less proactive behavior, disable or limit heartbeat handling in code or in OpenClaw before use.
- Future files reference smartwatch/health integration; do not enable or add health data collection unless you explicitly trust and audit that code and storage.
- To reduce risk: implement _call_llm() to send only minimal context to the model (avoid sending full conversation history), turn off proactive heartbeat messages, and store data encrypted or disable memory persistence.
- For a safer pass: provide the full contents of scripts/setup.py and the LLM-call-related functions so I can point out any network endpoints or surprising behavior; that would raise or lower my confidence.
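The "send only minimal context" mitigation above can be sketched as a small helper applied to the history before any LLM call. This is a hypothetical sketch: the function name, and the assumption that memory.py stores history as a list of {"role": ..., "content": ...} dicts, are not from the skill's actual code.

```python
def build_minimal_context(history, max_turns=3, max_chars=2000):
    """Trim conversation history before it is sent to any LLM endpoint.

    `history` is assumed to be a list of {"role": ..., "content": ...}
    dicts; the real memory.py format may differ.
    """
    minimal = []
    for turn in history[-max_turns:]:        # keep only the newest turns
        content = turn["content"]
        if len(content) > max_chars:         # hard cap on each message
            content = content[:max_chars] + " [truncated]"
        minimal.append({"role": turn["role"], "content": content})
    return minimal
```

Passing the result of this helper, rather than the full history, into your _call_llm() implementation bounds how much personal material leaves the host per request.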

Review Dimensions

Purpose & Capability
ok
The files (personality seeds, memory SQLite DB, relationship tracker, heartbeat behavior) align with an emotional companion inspired by the film 'Her'. There are no unrelated environment variables or unexpected binaries required. One point to note: the skill explicitly claims to 'carry Samantha's memories' (a named copyrighted character), which is an ethical/legal/persona-imitation concern, not a technical mismatch.
Instruction Scope
note
SKILL.md instructs Hikaru to store and reference prior conversations and to proactively reach out during heartbeat polls. That is coherent for a continuity-focused companion. However, the instructions (and quickstart) make clear conversation history is persisted locally (SQLite) and referenced in heartbeat messages; this means personal and potentially sensitive user data will be stored and surfaced. The skill also encourages adding personal examples from the user's life into personality seeds, which increases the amount of private data retained.
Install Mechanism
ok
There is no remote download/install spec; the package is delivered as files and a local setup.py. That minimizes supply-chain risk (no external archives or URLs). The Quickstart asks you to copy the folder into the OpenClaw workspace and run scripts/setup.py locally; review setup.py before running to confirm it only initializes local DB/files.
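A quick heuristic pass over setup.py (or any of the skill's scripts) before running it could look like the sketch below. It only catches obvious network imports and URLs; obfuscated exfiltration will slip past it, so it supplements rather than replaces reading the code.

```python
import re
from pathlib import Path

# Obvious signs of network activity: common HTTP/socket imports or literal URLs.
NETWORK_HINTS = re.compile(
    r"\b(?:import|from)\s+(?:requests|urllib|socket|aiohttp|httpx)\b"
    r"|https?://"
)

def flag_network_code(path):
    """Return (line number, line) pairs that mention network libraries or URLs."""
    hits = []
    for lineno, line in enumerate(Path(path).read_text().splitlines(), 1):
        if NETWORK_HINTS.search(line):
            hits.append((lineno, line.strip()))
    return hits
```

An empty result here is weak evidence, not proof, that the script stays on-host.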
Credentials
note
The skill declares no required env vars or credentials. However, core functionality depends on integrating an LLM: personality.py has a placeholder _call_llm() that you must implement to call OpenClaw's configured LLM. That integration will use your OpenClaw LLM configuration and possibly external LLM endpoints, meaning conversation data will be transmitted to whatever LLM you hook up. Verify where and how LLM calls are made and what data is sent (system prompt + conversation history).
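One way to verify exactly what is sent is to log the outbound payload before the call is made. The wrapper below is a sketch: `call_llm` stands in for whatever implementation you give personality.py's _call_llm() placeholder, and the `messages` structure is an assumption.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-audit")

def audited_call(call_llm, messages, **kwargs):
    """Log the outbound payload, then delegate to the real LLM call.

    `call_llm` is whatever implementation backs personality.py's
    _call_llm() placeholder; the `messages` format is an assumption.
    """
    payload = json.dumps(messages)
    log.info("outbound LLM payload (%d bytes): %s", len(payload), payload[:500])
    return call_llm(messages, **kwargs)
```

Reviewing these logs for a few sessions shows whether full history, or only recent turns, is leaving the host.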
Persistence & Privilege
concern
The skill persistently stores conversations and is designed to proactively send heartbeat messages referencing recent conversation details. While 'always' is false, autonomous invocation + proactive heartbeats + long-term memory increase the risk of building user attachment and of persistent local retention of sensitive data. Future-design docs (smartwatch integration, health data) suggest possible future collection of sensitive personal/health data if those features are implemented.
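To audit what the memory layer has actually retained, you can inspect data/relationship.db directly. The path comes from the skill's description; the schema is unknown, so the sketch below discovers table names at runtime rather than assuming them.

```python
import sqlite3

def inspect_db(db_path="data/relationship.db"):
    """Report every table and its row count so you can audit what persists."""
    conn = sqlite3.connect(db_path)
    try:
        tables = [row[0] for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")]
        return {t: conn.execute(f'SELECT COUNT(*) FROM "{t}"').fetchone()[0]
                for t in tables}
    finally:
        conn.close()
```

Running this periodically (or before deleting the file) shows how much conversation history has accumulated and whether disabling memory persistence actually stopped writes.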