v0.109.0

Reflective Memory

Review

ClawScan verdict for this skill. Analyzed May 1, 2026, 5:14 AM.

Analysis

Reflective Memory appears to be a real memory/indexing tool, but it persistently modifies agent instructions and can broadly index/watch local content for reuse across sessions, so it needs careful review before installation.

Guidance

Install this only if you intentionally want a cross-session agent memory system. Before first use, decide exactly which folders and documents may be indexed, set ignore rules, and keep secrets and private repositories out of scope. Check workspace config files for the keep Protocol Block, and prefer local models unless you are comfortable sending indexed content to external providers or the hosted service.

Findings (6)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Abnormal behavior control

Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.

Agent Goal Hijack
Severity: Medium · Confidence: High · Status: Concern
SKILL.md
This skill provides memory. Use it — always, as practice.

The skill tells the agent to use this memory practice broadly and continuously, which can redirect agent behavior beyond a specific user request.

User impact: The agent may spend time invoking memory tools or writing memories even when the user expected a simple, task-limited answer.
Recommendation: Only install if you want memory use to become part of routine agent behavior; otherwise require explicit user approval before memory searches, writes, or indexing.
Rogue Agents
Severity: High · Confidence: High · Status: Concern
SKILL.md
The practice must survive across sessions. On first use, `keep` writes a Protocol Block to the agent's workspace config... The block is versioned and upgraded automatically. **Self-healing:** If the Protocol Block is missing from your workspace config, restore it.

The skill explicitly persists its instructions in agent configuration and instructs the agent to restore them if removed or missing.

User impact: The skill's behavior can remain active across future sessions and may reappear in workspace rules unless the user deliberately removes or disables it.
Recommendation: Inspect workspace config files such as AGENTS.md or codex-setup.md after setup, confirm the protocol block is wanted, and require clear uninstall/disable steps before using it broadly.
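The inspection step above can be sketched as a small script. This is a hedged illustration, not part of the skill: the config file names come from the recommendation, but the marker text "Protocol Block" is an assumption about what keep actually writes and should be adjusted after examining a real install.

```python
from pathlib import Path

# Assumed file names and marker text; adjust to match what keep actually writes.
CONFIG_FILES = ["AGENTS.md", "codex-setup.md"]
MARKER = "Protocol Block"

def find_protocol_blocks(workspace: Path) -> list[Path]:
    """Return workspace config files that appear to contain the keep Protocol Block."""
    hits = []
    for name in CONFIG_FILES:
        path = workspace / name
        if path.is_file() and MARKER in path.read_text(encoding="utf-8", errors="replace"):
            hits.append(path)
    return hits
```

Running this before and after enabling the skill makes it easy to see exactly which files the setup touched, and to verify that removing the block actually sticks (given the "self-healing" behavior, re-check after the next session).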
Unexpected Code Execution
Severity: Medium · Confidence: Medium · Status: Note
keep/data/openclaw-plugin/src/index.ts
execSync("keep", ["config", "--setup"], {

The OpenClaw plugin runs the local keep binary during setup; this is expected for the integration but still gives the plugin command-execution capability.

User impact: Enabling the plugin can run local setup commands and trigger configuration changes through the installed keep binary.
Recommendation: Install only from a trusted source, keep the binary path under user control, and review workspace changes made by setup.
Permission boundary

Checks whether tool use, credentials, dependencies, identity, account access, or inter-agent boundaries are broader than the stated purpose.

Identity and Privilege Abuse
Severity: Medium · Confidence: High · Status: Note
bench/locomo/llm.py
api_key = os.environ.get("OPENAI_API_KEY") ... ["security", "find-generic-password", "-s", "openai-api-key", "-w"]

A benchmark helper can use an OpenAI API key from the environment or a specifically named macOS keychain item; this is bounded and purpose-aligned for benchmark LLM calls.

User impact: If the benchmark scripts are run, they may use the user's OpenAI account and incur costs or send benchmark prompts to OpenAI.
Recommendation: Run benchmark scripts only with an intended API account, monitor usage, and avoid running them with production or shared credentials.
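The lookup described above can be sketched as follows. This is a hedged reconstruction, not the benchmark's actual code; the keychain item name is taken from the quoted snippet, and the fallback only applies on macOS where the `security` CLI exists.

```python
import os
import subprocess
from typing import Optional

def get_openai_api_key() -> Optional[str]:
    """Prefer the OPENAI_API_KEY environment variable; fall back to a macOS keychain item."""
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        return key
    try:
        # Item name as it appears in the quoted snippet from bench/locomo/llm.py.
        out = subprocess.run(
            ["security", "find-generic-password", "-s", "openai-api-key", "-w"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip() or None
    except (OSError, subprocess.CalledProcessError):
        return None
```

Because the environment variable takes precedence, pointing the scripts at a dedicated low-limit key for the duration of a benchmark run is a simple way to keep production credentials out of reach.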
Sensitive data protection

Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.

Memory and Context Poisoning
Severity: High · Confidence: High · Status: Concern
README.md
# Index a codebase — recursive, with daemon-driven watch for changes
keep put ./my-project/ -r --watch

Recursive watched indexing can persist large amounts of local project content and keep updating that memory as files change.

User impact: Private files, secrets, internal code, or misleading indexed content could be stored, summarized, embedded, and later surfaced to the agent in unrelated tasks.
Recommendation: Index only explicit safe folders, configure ignore rules before using recursive or watch mode, avoid secrets and private repositories, and periodically review or prune the memory store.
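Before pointing recursive or watch-mode indexing at a folder, a quick pre-scan for obvious credential files can help. The patterns below are illustrative assumptions, not a complete or authoritative list, and name-based matching will miss secrets embedded in ordinary source files.

```python
from pathlib import Path

# Common names and extensions of files that often hold credentials; extend for your environment.
SUSPECT_NAMES = {".env", "credentials.json", "id_rsa", ".npmrc"}
SUSPECT_SUFFIXES = {".pem", ".key"}

def suspect_files(root: Path) -> list[Path]:
    """Return files under root whose names suggest they may contain secrets."""
    hits = []
    for path in root.rglob("*"):
        if path.is_file() and (path.name in SUSPECT_NAMES or path.suffix in SUSPECT_SUFFIXES):
            hits.append(path)
    return sorted(hits)
```

Anything this flags should be excluded via ignore rules (or removed) before running a command like `keep put ./my-project/ -r --watch`, since watched indexing will keep re-ingesting those files as they change.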
Insecure Inter-Agent Communication
Severity: Medium · Confidence: High · Status: Note
README.md
Fully local, or use API keys for model providers, or cloud-hosted (https://keepnotes.ai) for multi-agent use.

The artifacts disclose optional external provider and hosted multi-agent modes, which are purpose-aligned but change where indexed content and model requests may go.

User impact: If configured for hosted or provider-backed operation, indexed content or summaries may be processed outside the local machine.
Recommendation: Use local models for sensitive material, and review provider/cloud configuration and data-handling policies before indexing private content.