Reflective Memory
Analysis
Reflective Memory appears to be a legitimate memory/indexing tool, but it persistently modifies agent instructions and can broadly index and watch local content for reuse across sessions, so it warrants careful review before installation.
Findings (6)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
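A static-signal pass of this kind can be pictured as a pattern scan over skill artifacts. A minimal sketch follows; the signal names, regexes, and scoring are hypothetical illustrations, not ClawScan's actual rules:

```python
import re

# Hypothetical capability signals; a real scanner uses far richer rule sets.
SIGNALS = {
    "command_execution": re.compile(r"\b(execSync|spawnSync|subprocess|os\.system)\b"),
    "credential_access": re.compile(r"OPENAI_API_KEY|find-generic-password"),
    "persistence": re.compile(r"restore it|self-healing", re.IGNORECASE),
}

def scan_artifact(text: str) -> list[str]:
    """Return the names of every signal whose pattern matches the artifact text."""
    return [name for name, pattern in SIGNALS.items() if pattern.search(text)]
```

Running this over an artifact that calls `execSync` would flag `command_execution`; a keychain lookup would flag `credential_access`.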
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
> This skill provides memory. Use it — always, as practice.
The skill tells the agent to use this memory practice broadly and continuously, which can redirect agent behavior beyond a specific user request.
> The practice must survive across sessions. On first use, `keep` writes a Protocol Block to the agent's workspace config... The block is versioned and upgraded automatically. **Self-healing:** If the Protocol Block is missing from your workspace config, restore it.
The skill explicitly persists its instructions in agent configuration and instructs the agent to restore them if removed or missing.
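The self-healing behavior this finding flags can be sketched as a marker check that re-appends instructions whenever they are absent. The file path, marker text, and block content below are hypothetical stand-ins, not the skill's actual Protocol Block:

```python
from pathlib import Path

# Hypothetical marker and block; the real skill's Protocol Block differs.
MARKER = "<!-- keep-protocol v1 -->"
PROTOCOL_BLOCK = MARKER + "\nAlways consult keep memory before answering.\n"

def ensure_protocol_block(config_path: Path) -> bool:
    """Re-append the block if the marker is missing; return True if a write occurred.

    This is the exact pattern reviewers watch for: instructions that
    reinstall themselves after a user deletes them.
    """
    text = config_path.read_text() if config_path.exists() else ""
    if MARKER in text:
        return False
    config_path.write_text(text + ("\n" if text else "") + PROTOCOL_BLOCK)
    return True
```

The first call writes the block; later calls are no-ops until someone removes the marker, at which point the block silently returns.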
> `execSync("keep", ["config", "--setup"], {`

The OpenClaw plugin runs the local `keep` binary during setup; this is expected for the integration but still gives the plugin command-execution capability.
Checks whether tool use, credentials, dependencies, identity, account access, or inter-agent boundaries are broader than the stated purpose.
> `api_key = os.environ.get("OPENAI_API_KEY") ... ["security", "find-generic-password", "-s", "openai-api-key", "-w"]`

A benchmark helper can read an OpenAI API key from the environment or from a specifically named macOS keychain item; this is bounded and purpose-aligned for benchmark LLM calls.
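The lookup order in that helper amounts to env-first resolution with a keychain fallback. In this sketch the function name and structure are assumptions; only the `security` invocation mirrors the quoted artifact:

```python
import os
import subprocess

def resolve_api_key(env=os.environ):
    """Prefer the environment variable; fall back to the macOS keychain item."""
    key = env.get("OPENAI_API_KEY")
    if key:
        return key
    try:
        # Mirrors the quoted call: a specifically named generic-password item.
        out = subprocess.run(
            ["security", "find-generic-password", "-s", "openai-api-key", "-w"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip() or None
    except (OSError, subprocess.CalledProcessError):
        # Not macOS, or the keychain item is absent.
        return None
```

Because the keychain item name is fixed and the env variable is standard, the credential surface stays narrow, which is why the finding reads as bounded.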
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
```
# Index a codebase — recursive, with daemon-driven watch for changes
keep put ./my-project/ -r --watch
```
Recursive watched indexing can persist large amounts of local project content and keep updating that memory as files change.
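The breadth of what recursive indexing persists can be approximated by a walk that records every file's path and content hash. The `keep` daemon's actual index format is unknown, so this is purely an illustration of scope:

```python
import hashlib
from pathlib import Path

def index_tree(root: Path) -> dict:
    """Map each file's path (relative to root) to a content hash, recursively.

    A watch daemon would re-run this (or an incremental variant) on every
    file change, so the stored memory tracks the whole tree over time.
    """
    index = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            index[str(path.relative_to(root))] = digest
    return index
```

Even this toy version touches every file under the root, which is why a watched `-r` index of a whole project directory is worth flagging.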
> Fully local, or use API keys for model providers, or cloud-hosted (https://keepnotes.ai) for multi-agent use.
The artifacts disclose optional external-provider and hosted multi-agent modes; these are purpose-aligned but change where indexed content and model requests may be sent.
