MindGardener

Review audited by ClawScan on May 10, 2026.

Overview

MindGardener is coherent as a memory tool, but it warrants review: it can persist and reinject conversation-derived memory, and its default and documented workflows may send conversation logs to external LLM providers despite the local/offline framing.

Install only if you want durable agent memory. Before enabling cron, sync, or auto-injection, decide which logs may be processed, whether external LLM providers are acceptable, and whether generated memory should be reviewed before future sessions use it.

Findings (5)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Conversation history, personal facts, or long-term memory may be sent to a configured LLM provider unless the user explicitly chooses a local provider such as Ollama.

Why it was flagged

The documented LLM-backed commands process conversation logs and memory content, and the shown/default provider is Google. This is purpose-aligned, but it creates an external data-flow for sensitive local memory despite local-first/offline messaging.

Skill content
MindGardener reads your agent's conversation logs ... Only 3 commands need an LLM (`extract`, `surprise`, `consolidate`) ... provider: google
Recommendation

Use a local provider for private memory, or review provider settings, retention policies, and API keys before running extract, surprise, or consolidate on sensitive logs.
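For the private-memory case, the provider switch is a configuration change. The sketch below is hypothetical: only the `provider` key (shown in the skill as `provider: google`) and Ollama as a local option come from the skill content; the file name and `model` field are assumptions about how such a config might look.

```yaml
# Hypothetical config sketch — field names beyond `provider` are assumptions,
# not MindGardener's documented schema.
provider: ollama   # keeps extract/surprise/consolidate traffic on-device
model: llama3      # assumed field: whichever local model your Ollama serves
```

With a local provider configured, the three LLM-backed commands (`extract`, `surprise`, `consolidate`) would no longer create an external data flow for conversation logs.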

What this means

Sensitive facts or mistaken summaries can become part of future agent context and may spread across agents if sync is enabled.

Why it was flagged

The skill deliberately persists extracted memories and injects generated recall context into future sessions, including shared multi-agent memory. This is core functionality, but it can preserve sensitive or incorrect information and influence later agent behavior.

Skill content
Auto-injection — context ready at session start ... Multi-agent sync — merge per-agent memories to shared ... garden inject --output RECALL-CONTEXT.md
Recommendation

Inspect generated memory files, use provenance/confidence fields, disable auto-injection or sync until reviewed, and define exclusions for private or untrusted logs.
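The review step can be as simple as filtering generated memory on its confidence field before anything is injected. A minimal sketch, assuming a line-oriented memory format with bracketed confidence/provenance tags (MindGardener's real schema may differ):

```shell
# Illustrative memory file — the format here is an assumption, not MindGardener's schema.
cat > sample-memory.md <<'EOF'
- prefers dark mode        [confidence: high, source: session-2026-05-01]
- may be moving to Berlin  [confidence: low,  source: session-2026-05-03]
EOF
# Keep only high-confidence entries; inspect the rest by hand before they
# are allowed into recall context or shared multi-agent memory.
grep 'confidence: high' sample-memory.md > reviewed-memory.md
cat reviewed-memory.md
```

With auto-injection disabled, the reviewed file (or its equivalent) is what you would hand to `garden inject`, rather than the raw extraction output.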

Note (High Confidence)
ASI10: Rogue Agents
What this means

Once configured, the agent may keep updating and using memory on a schedule without a fresh prompt each time.

Why it was flagged

The recurring cron and bootstrap instructions are disclosed and user-directed, but they create ongoing autonomous memory updates and context generation after setup.

Skill content
Then add to your agent's nightly cron or BOOTSTRAP.md: garden extract && garden surprise && garden consolidate ... garden inject --output RECALL-CONTEXT.md
Recommendation

Only add the cron/bootstrap steps after reviewing the commands, and start with manual runs or dry-runs where available.
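Once the commands have been run manually and their output reviewed, the nightly automation can be expressed as a single crontab entry. The command chain comes from the skill's own docs; the 02:30 schedule is an arbitrary example:

```
30 2 * * * garden extract && garden surprise && garden consolidate && garden inject --output RECALL-CONTEXT.md
```

Because the commands are chained with `&&`, a failure in `extract` stops the run before anything is consolidated or injected, which limits how far a bad night's output can propagate.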

What this means

A user could install or trust the wrong package/project if following the nested documentation.

Why it was flagged

The bundle includes a secondary/older Engram skill that references a different package and homepage than the top-level MindGardener package, creating provenance and package-identity ambiguity.

Skill content
name: engram ... homepage: https://github.com/maweding/agent-engram ... pip install agent-engram
Recommendation

Verify the intended package, repository, and version before installing; prefer the top-level documented `mindgardener` package unless you intentionally want `agent-engram`.
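One cheap pre-install check is to list every install line the bundle's documentation actually contains, so a package-identity mismatch is visible before anything is run. The directory layout below is invented for illustration; only the two package names are taken from the finding:

```shell
# Hypothetical bundle layout — only the package names come from the finding.
mkdir -p bundle/top bundle/engram
printf 'name: mindgardener\ninstall: pip install mindgardener\n' > bundle/top/SKILL.md
printf 'name: engram\nhomepage: https://github.com/maweding/agent-engram\ninstall: pip install agent-engram\n' > bundle/engram/SKILL.md
# Surface every install target so a mismatch is obvious before running pip:
grep -r 'pip install' bundle
```

If the grep turns up more than one package name, resolve which repository and version you actually intend to trust before installing either.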

What this means

Cloud-provider keys placed in the environment can authorize paid requests and expose memory-processing data to that provider.

Why it was flagged

Provider API keys are expected for cloud LLM use, but they are not declared as required registry credentials because local operation is possible.

Skill content
Set your API key: export GEMINI_API_KEY=...   # or OPENAI_API_KEY, ANTHROPIC_API_KEY
Recommendation

Use least-privilege or project-scoped keys, avoid placing broad secrets in shared environments, and remove keys if using a local provider.
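If a cloud key is needed at all, prefix-scoping it to a single command keeps it out of the persistent environment. A sketch under assumptions, with `env` standing in for a real `garden extract` invocation:

```shell
# Prefix assignment passes the key to one process only; nothing is exported.
# (`env` is a stand-in here for a command like `garden extract`.)
GEMINI_API_KEY="example-key" env | grep '^GEMINI_API_KEY='
# The surrounding shell never held the key:
echo "in shell: '${GEMINI_API_KEY:-unset}'"
# If you move to a local provider such as Ollama, clear any cloud keys entirely:
unset GEMINI_API_KEY OPENAI_API_KEY ANTHROPIC_API_KEY
```

This keeps the key out of shell profiles and shared environments, so a scheduled `garden` run cannot silently reuse a broad credential you forgot was exported.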