Duru Memory

Audited by ClawScan on May 10, 2026.

Overview

Duru Memory largely matches its stated purpose of persistent local Markdown memory, but one helper script can write outside the memory folder, so review it before use.

Use this only if you are comfortable with persistent local Markdown memory. Before installing, restrict or patch scripts/memory-write-tag.sh so it can only write under workspace/memory, keep secrets out of memory files, keep Ollama pointed at localhost unless remote processing is intentional, and verify the Python dependencies and model downloads.

Findings (4)

This is an artifact-based, informational review of SKILL.md, the metadata, the install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Write helper can target files outside the memory directory

What this means

If the agent or a user supplies the wrong path, this helper can overwrite or append to arbitrary writable files, not just memory notes.

Why it was flagged

The helper writes to an absolute target or a workspace-relative target without checking that the final path is under the intended memory directory or is a Markdown memory file.

Skill content
if [[ "$TARGET" != /* ]]; then
  ABS_TARGET="$WORKSPACE/$TARGET"
fi
...
cat "$CONTENT_FILE" > "$ABS_TARGET"
Recommendation

Constrain the target to a canonical path under workspace/memory, reject absolute paths and '..' traversal, require a .md suffix, and ask for confirmation before overwrite mode.
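The recommendation above can be sketched as a validation step for the helper. This is a minimal sketch, assuming the script receives the relative target path and the workspace root as arguments, as in the excerpt; adapt the names to the real memory-write-tag.sh:

```shell
# Hypothetical guard for memory-write-tag.sh: resolve the target and refuse
# anything that is absolute, traverses upward, is not Markdown, or escapes
# the memory directory. Prints the vetted absolute path on success.
validate_target() {
  target=$1 workspace=$2
  memdir=$(cd "$workspace/memory" 2>/dev/null && pwd -P) || {
    echo "rejected: no memory dir" >&2; return 1; }

  case "$target" in
    /*)   echo "rejected: absolute path" >&2;  return 1 ;;
    *..*) echo "rejected: '..' traversal" >&2; return 1 ;;
  esac
  case "$target" in
    *.md) ;;
    *)    echo "rejected: not a .md file" >&2; return 1 ;;
  esac

  # Canonicalize the parent directory (following symlinks) and confirm it
  # still lies under the memory directory.
  parent=$(cd "$memdir/$(dirname -- "$target")" 2>/dev/null && pwd -P) || {
    echo "rejected: unresolvable parent" >&2; return 1; }
  case "$parent" in
    "$memdir"|"$memdir"/*) printf '%s\n' "$parent/$(basename -- "$target")" ;;
    *) echo "rejected: escapes memory dir" >&2; return 1 ;;
  esac
}
```

The physical canonicalization (pwd -P) also catches symlinks inside the memory folder that point elsewhere, which a pure string check on the target would miss.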

Finding 2: Persistent Markdown memory and semantic index shape later sessions

What this means

Incorrect, sensitive, or malicious content placed in memory can influence later answers and may be retained in local index files.

Why it was flagged

The skill intentionally persists and reuses Markdown memory plus a semantic index across sessions.

Skill content
Treat the Markdown files as the source of truth... DB path: `memory/.semantic-index.db`... Daily warmup: `session-start.sh` runs `memory-semantic-search.py --build-only` once per day
Recommendation

Keep secrets out of memory, review hard-rules/current-state files regularly, and delete or rebuild the semantic index when removing sensitive memory.
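A lightweight hygiene pass can support this. The sketch below assumes memory lives in a single directory with the index at .semantic-index.db (per the skill content above); the secret patterns are a starting point, not a complete scanner:

```shell
# Flag likely secrets in memory notes before they are indexed and reused.
scan_memory_for_secrets() {
  grep -rniE --include='*.md' \
    'api[_-]?key|secret|password|private key' "$1"
}

# After deleting sensitive notes, drop the stale semantic index too, so the
# removed content is not retained there; the skill's daily warmup rebuilds it.
drop_semantic_index() {
  rm -f "$1/.semantic-index.db"
}
```

Deleting a Markdown note alone is not enough, because embeddings of its text can survive in the index file until it is rebuilt.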

Finding 3: Embedding endpoint is configurable beyond localhost

What this means

With the default local Ollama setup this stays on the machine, but changing the base URL could send memory contents to another service.

Why it was flagged

Memory/query text is sent to the configured Ollama-compatible embedding endpoint. The default endpoint is localhost, but the URL is configurable.

Skill content
payload = json.dumps({"model": model, "prompt": text}).encode("utf-8")
...
f"{base_url.rstrip('/')}/api/embeddings"
Recommendation

Keep the Ollama base URL pointed at a trusted local endpoint unless you intentionally want remote processing, and do not store credentials or secrets in memory files.
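A start-up guard can make the local-only default explicit. This is a minimal sketch; how the base URL is actually configured is an assumption here, so wire the check to whatever variable or config key the skill really reads:

```shell
# Refuse to run if the configured embedding endpoint is not a local address.
# Falls back to the conventional local Ollama URL when unset.
assert_local_endpoint() {
  base=${1:-http://localhost:11434}
  case "$base" in
    http://localhost|http://localhost[:/]*|http://127.0.0.1|http://127.0.0.1[:/]*)
      return 0 ;;
    *)
      echo "refusing non-local embedding endpoint: $base" >&2
      return 1 ;;
  esac
}
```

Requiring ":" or "/" right after the host also rejects lookalikes such as http://localhost.example.com, which a plain prefix match would accept.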

Finding 4: Unpinned Python dependencies and Ollama model downloads

What this means

The skill depends on locally installed packages and models whose exact resolved artifacts are not pinned in the provided install metadata.

Why it was flagged

The setup flow downloads Python dependencies and local model artifacts outside the registry install spec.

Skill content
uv sync
ollama pull qwen3-embedding:0.6b
ollama pull gemma4:e4b
Recommendation

Install from a trusted source, consider using a lockfile or pinned package versions, and verify Ollama model sources before production use.
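One way to tighten reproducibility, sketched under the assumption that the project uses uv's lockfile workflow; these commands are illustrative, not the skill's documented setup:

```shell
# Resolve and record exact dependency versions, then install strictly from
# the committed lockfile (fails instead of silently re-resolving).
uv lock
uv sync --frozen

# Pull the models, then review what was actually resolved (name, ID, size)
# before relying on them.
ollama pull qwen3-embedding:0.6b
ollama list
```

Committing the lockfile turns "whatever resolved today" into an auditable artifact, which is the practical substitute for pinning inside the registry install spec.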