MindGardener
Review
Audited by ClawScan on May 10, 2026.
Overview
MindGardener is coherent as a memory tool, but it deserves review: it can persist and reinject conversation-derived memory, and its default/documented workflows may send logs to external LLM providers despite its local/offline framing.
Install only if you want durable agent memory. Before enabling cron, sync, or auto-injection, decide which logs may be processed, whether external LLM providers are acceptable, and whether generated memory should be reviewed before future sessions use it.
Findings (5)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Finding 1: Conversation history, personal facts, or long-term memory may be sent to a configured LLM provider unless the user explicitly chooses a local provider such as Ollama.
The documented LLM-backed commands process conversation logs and memory content, and the shown/default provider is Google. This is purpose-aligned, but it creates an external data flow for sensitive local memory despite the local-first/offline messaging.
Evidence: "MindGardener reads your agent's conversation logs ... Only 3 commands need an LLM (`extract`, `surprise`, `consolidate`) ... provider: google"
Mitigation: Use a local provider for private memory, or review provider settings, retention policies, and API keys before running extract, surprise, or consolidate on sensitive logs.
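Before running extract, surprise, or consolidate on sensitive logs, a quick pre-flight check of the provider setting can catch accidental cloud use. This is a sketch only: the config path, the `provider:` key layout, and the helper name are assumptions, not documented MindGardener behavior; only Ollama as a local option and the Google default come from the skill's docs.

```shell
# check_provider: warn unless the configured LLM provider is local.
# Assumes a YAML-style config with a top-level "provider: <name>" line;
# adjust the path and key to match your actual MindGardener configuration.
check_provider() {
    config="$1"
    provider=$(grep -E '^provider:' "$config" 2>/dev/null | awk '{print $2}')
    if [ "$provider" = "ollama" ]; then
        echo "local provider: $provider (logs stay on this machine)"
    else
        echo "WARNING: provider '$provider' may send conversation logs to a cloud LLM"
    fi
}

# Example (hypothetical config location):
# check_provider "$HOME/.garden/config.yaml"
```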
Finding 2: Sensitive facts or mistaken summaries can become part of future agent context and may spread across agents if sync is enabled.
The skill deliberately persists extracted memories and injects generated recall context into future sessions, including shared multi-agent memory. This is core functionality, but it can preserve sensitive or incorrect information and influence later agent behavior.
Evidence: "Auto-injection — context ready at session start ... Multi-agent sync — merge per-agent memories to shared ... garden inject --output RECALL-CONTEXT.md"
Mitigation: Inspect generated memory files, use provenance/confidence fields, disable auto-injection or sync until reviewed, and define exclusions for private or untrusted logs.
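One way to act on provenance/confidence fields before re-enabling auto-injection is to flag low-confidence entries for manual review. The tab-separated `confidence=` line format below is purely hypothetical; substitute whatever fields your generated memory files actually carry.

```shell
# flag_low_confidence FILE [THRESHOLD]: print memory entries whose
# confidence field falls below THRESHOLD (default 0.7) so they can be
# reviewed or excluded before injection. Assumes one entry per line with
# tab-separated key=value fields, e.g.:
#   user prefers dark mode<TAB>confidence=0.9<TAB>source=logs/2026-05-01.md
flag_low_confidence() {
    awk -F'\t' -v t="${2:-0.7}" '{
        for (i = 1; i <= NF; i++)
            if ($i ~ /^confidence=/) {
                split($i, kv, "=")
                if (kv[2] + 0 < t) print $0
            }
    }' "$1"
}
```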
Finding 3: Once configured, the agent may keep updating and using memory on a schedule without a fresh prompt each time.
The recurring cron and bootstrap instructions are disclosed and user-directed, but they create ongoing autonomous memory updates and context generation after setup.
Evidence: "Then add to your agent's nightly cron or BOOTSTRAP.md: garden extract && garden surprise && garden consolidate ... garden inject --output RECALL-CONTEXT.md"
Mitigation: Add the cron/bootstrap steps only after reviewing the commands, and start with manual runs or dry runs where available.
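As a concrete illustration of "manual runs first", the pipeline can be exercised once by hand and only then scheduled. The command chain is the one quoted from the skill's docs; the 03:00 schedule is arbitrary, and whether a dry-run flag exists should be checked against the skill's own help output rather than assumed.

```shell
# 1) Run the documented pipeline manually once and inspect the outputs:
#      garden extract && garden surprise && garden consolidate
#      garden inject --output RECALL-CONTEXT.md
# 2) Only then add a crontab entry (crontab -e); nightly at 03:00 is illustrative:
0 3 * * * garden extract && garden surprise && garden consolidate && garden inject --output RECALL-CONTEXT.md
```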
Finding 4: A user could install or trust the wrong package or project when following the nested documentation.
The bundle includes a secondary, older Engram skill that references a different package and homepage than the top-level MindGardener package, creating provenance and package-identity ambiguity.
Evidence: "name: engram ... homepage: https://github.com/maweding/agent-engram ... pip install agent-engram"
Mitigation: Verify the intended package, repository, and version before installing; prefer the top-level documented `mindgardener` package unless you intentionally want `agent-engram`.
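When both `mindgardener` and `agent-engram` appear in the bundle, the installed package's metadata can be checked against the repository you expect. The helper below just filters `pip show`-style output down to the provenance-relevant fields; the homepage values to compare against come from your own verification, not from this review.

```shell
# pkg_identity: reduce `pip show`-style metadata (read from stdin) to the
# three fields that matter for provenance: Name, Version, and Home-page.
pkg_identity() {
    grep -E '^(Name|Version|Home-page):'
}

# Usage (compare the Home-page lines against the repo you intend to trust):
#   pip show mindgardener | pkg_identity
#   pip show agent-engram | pkg_identity
```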
Finding 5: Cloud-provider keys placed in the environment can authorize paid requests and expose memory-processing data to that provider.
Provider API keys are expected for cloud LLM use, but they are not declared as required registry credentials because local operation is also possible.
Evidence: "Set your API key: export GEMINI_API_KEY=... # or OPENAI_API_KEY, ANTHROPIC_API_KEY"
Mitigation: Use least-privilege or project-scoped keys, avoid placing broad secrets in shared environments, and remove keys when using a local provider.
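To keep a cloud key out of the shell-wide environment, it can be set only for the one command that needs it. `GEMINI_API_KEY` and the `garden` commands come from the skill's docs; the key-file path and the wrapper itself are generic least-exposure sketches, not MindGardener features.

```shell
# run_with_key KEY_FILE CMD...: run CMD with GEMINI_API_KEY set only in
# that command's environment; the calling shell never exports the key.
run_with_key() {
    key_file="$1"; shift
    GEMINI_API_KEY="$(cat "$key_file")" "$@"
}

# Example (hypothetical key location):
#   run_with_key ~/.secrets/gemini-key garden extract
# When switching to a local provider, clear any lingering keys:
#   unset GEMINI_API_KEY OPENAI_API_KEY ANTHROPIC_API_KEY
```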
