Neural Memory Enhanced

Severity: Warn. Audited by ClawScan on May 10, 2026.

Overview

This skill openly presents itself as a persistent memory system, but it proactively stores and reinjects conversation context across sessions without clear user controls, and its install/provenance metadata is inconsistent.

Install only if you want an agent with persistent cross-session memory. Before using it, decide what may be saved, keep separate memory brains for different projects, avoid storing secrets, and verify the external package source, since the install metadata is inconsistent.

Findings (3)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1

What this means

Private or outdated conversation details could be saved and later influence the agent’s behavior in unrelated tasks.

Why it was flagged

The skill directs the agent to persist and later reinject context across sessions, including facts, decisions, errors, and preferences, but the visible instructions do not define user approval, sensitivity exclusions, project boundaries, or retention controls.

Skill content
Use PROACTIVELY when: ... Starting a new task — inject relevant context from memory ... After making decisions or encountering errors — store for future reference

Recommendation

Use a dedicated brain per project, avoid storing secrets or sensitive personal data, review memory contents regularly, and require explicit user approval before saving or recalling sensitive context.
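To make the recommendation concrete, the gist of per-project brains plus an explicit approval gate can be sketched as below. `MemoryStore`, its methods, and the brain names are hypothetical illustrations, not the skill's actual `nmem` API.

```python
# Hedged sketch: a per-project memory store that requires an explicit
# approval flag before every write. `MemoryStore` is hypothetical and
# does not correspond to the skill's real API.

class MemoryStore:
    def __init__(self, brain: str):
        # One isolated "brain" (namespace) per project.
        self.brain = brain
        self._entries: list[str] = []

    def save(self, text: str, approved: bool = False) -> bool:
        # Refuse any write that the user has not explicitly approved.
        if not approved:
            return False
        self._entries.append(text)
        return True

    def recall(self) -> list[str]:
        # Only entries from this project's brain are ever returned.
        return list(self._entries)

work = MemoryStore(brain="project-a")
personal = MemoryStore(brain="project-b")

work.save("Decided to use PostgreSQL", approved=True)
work.save("unapproved note")                  # rejected: no approval flag
personal.save("Prefers dark mode", approved=True)
```

Keeping the two stores separate means a recall in one project can never surface context captured in the other.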

Finding 2

What this means

Sensitive details from normal conversations may be retained longer than the user expects and reused later.

Why it was flagged

The skill encourages automatic extraction from conversation text into persistent memory. The artifact does not show safeguards for redacting secrets, confirming user intent, or limiting what conversation content may be stored.

Skill content
At Session End
7. Call `nmem_auto` with action="process" on important conversation segments
8. This auto-extracts facts, decisions, errors, and TODOs

Recommendation

Configure the agent to ask before auto-capturing conversation segments, and establish deletion/redaction practices for stored memories.
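One piece of the redaction practice can be sketched as a pre-storage filter that masks likely secrets before a segment is auto-captured. The patterns below are illustrative and far from exhaustive, and this is not behavior the skill ships with.

```python
import re

# Hedged sketch: screen a conversation segment for likely secrets
# before it is persisted. Patterns are examples only, not a complete
# secret-detection scheme.

SECRET_PATTERNS = [
    re.compile(r"(?i)\bsk-[a-z0-9]{20,}\b"),                      # API-key-like tokens
    re.compile(r"(?i)\b(password|passwd|secret)\s*[:=]\s*\S+"),   # inline credentials
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),            # PEM key headers
]

def redact(segment: str) -> str:
    """Replace likely secrets with a placeholder before storage."""
    for pattern in SECRET_PATTERNS:
        segment = pattern.sub("[REDACTED]", segment)
    return segment

print(redact("password: hunter2 and the deadline is Friday"))
# The credential is masked; the ordinary context around it is preserved.
```

A filter like this reduces, but does not eliminate, the risk of retaining sensitive material, so explicit user confirmation before capture remains the primary control.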

Finding 3

What this means

A user or installer could fetch a different package source than expected, making it harder to know what code is actually being run.

Why it was flagged

The install metadata mixes a pip-labeled workflow with a Node install kind, while the setup text tells users to run `pip install neural-memory`. This creates provenance and installation ambiguity for an external package that implements the memory tools.

Skill content
"install":[{"id":"pip","kind":"node","package":"neural-memory","bins":["nmem"],"label":"pip install neural-memory"}]
Recommendation

Verify the intended package repository and package manager before installing, prefer pinned versions, and review the external package if possible.
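If the package is genuinely meant to come from PyPI, a consistent install entry might look like the following. The schema keys are assumed from the snippet quoted above, and `"kind": "pip"` is a guess at the intended value rather than a documented one:

```json
{
  "install": [
    {
      "id": "pip",
      "kind": "pip",
      "package": "neural-memory",
      "bins": ["nmem"],
      "label": "pip install neural-memory"
    }
  ]
}
```

Pinning an exact version in the install command (e.g. `pip install neural-memory==<version>`) further narrows what can be fetched and makes the installed code easier to review.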