Ultramemory

Pass. Audited by ClawScan on May 10, 2026.

Overview

Ultramemory appears purpose-aligned as a persistent agent memory tool, but users should be aware it stores conversation-derived facts, uses LLM provider credentials, and depends on an external package.

Install only if you want an agent to maintain persistent memory. Pin and trust the ultramemory package source, use a dedicated LLM API key, avoid ingesting secrets or unnecessary private details, keep the local API private, and manage the SQLite memory database as sensitive data.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Unpinned external dependency

What this means

Installing the skill requires trusting the external ultramemory package source and whatever version is current at install time.

Why it was flagged

The included scripts are wrappers; the core memory engine comes from an external, unpinned package or GitHub checkout.

Skill content
pip install ultramemory
# Or from source
git clone https://github.com/jared-goering/ultramemory.git
cd ultramemory && pip install -e .
Recommendation

Install from a trusted source, pin a known-good version, and review the upstream package before using it with sensitive memories.
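One way to act on this is a hash-pinned requirements file. This is a minimal sketch, not the skill's documented workflow: the version "X.Y.Z" and the hash are placeholders you must replace with values you have verified against the upstream release yourself.

```shell
# Pin ultramemory to an exact, reviewed version with an artifact hash.
# X.Y.Z and the hash value below are placeholders, not real values.
cat > requirements.txt <<'EOF'
ultramemory==X.Y.Z --hash=sha256:REPLACE_WITH_VERIFIED_HASH
EOF

# Enforce the pin and hash at install time (commented out here because it
# needs network access and a real version/hash pair):
# pip install --require-hashes -r requirements.txt
```

With `--require-hashes`, pip refuses to install anything whose archive does not match the recorded digest, so a compromised or silently re-released package fails the install instead of landing on disk.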

Finding 2: LLM provider credentials

What this means

Ingested text may be processed using the configured LLM provider credentials, and users may need to expose an API key in the local environment.

Why it was flagged

The wrapper requires an Anthropic or OpenAI API key, which is expected for LLM-based fact extraction but gives the tool access to provider credentials.

Skill content
if [[ -z "${ANTHROPIC_API_KEY:-}" && -z "${OPENAI_API_KEY:-}" ]]; then
    echo "ERROR: ANTHROPIC_API_KEY or OPENAI_API_KEY must be set for fact extraction."
    exit 1
fi
Recommendation

Use a dedicated, least-privilege API key where possible, avoid ingesting secrets, and monitor provider usage.
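A pre-flight check along these lines keeps the key handling explicit: run with exactly one dedicated key, and warn when both providers are configured so only one credential is ever exposed to the tool. The `check_llm_key` function name is ours, not part of the skill.

```shell
# Hypothetical pre-flight check: require one provider key, warn on two.
check_llm_key() {
  if [ -z "${ANTHROPIC_API_KEY:-}" ] && [ -z "${OPENAI_API_KEY:-}" ]; then
    echo "ERROR: set ANTHROPIC_API_KEY or OPENAI_API_KEY (use a dedicated, least-privilege key)" >&2
    return 1
  fi
  if [ -n "${ANTHROPIC_API_KEY:-}" ] && [ -n "${OPENAI_API_KEY:-}" ]; then
    # Both set: the tool only needs one; unset the other to narrow exposure.
    echo "WARN: both provider keys set; unset one" >&2
  fi
  return 0
}
```

Pairing this with a key created solely for this skill makes provider-side usage monitoring straightforward: any traffic on that key came from the memory tool.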

Finding 3: Persistent conversation-derived memory

What this means

Private, stale, or incorrectly extracted facts could persist across sessions and influence future responses.

Why it was flagged

The skill is designed to store conversation-derived facts and later inject recalled context into future agent sessions.

Skill content
At the start of any session, hydrate context:
bash scripts/startup-recall.sh <agent-id>

After meaningful conversations, pass the text directly:
bash scripts/memory.sh ingest "User decided to use React for the frontend. Budget is $50k."
Recommendation

Only ingest information you want remembered, periodically inspect the memory database, and be cautious with sensitive personal, business, or credential-like content.
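Treating the SQLite file as sensitive data can be as simple as restricting it to the owning user and inspecting it on a schedule. The path and the `ULTRAMEMORY_DB` variable are assumptions for this sketch; adjust them to wherever your install actually writes its database, and check the real schema before querying.

```shell
# Lock the memory database down to the owning user.
# ULTRAMEMORY_DB and the default path are assumptions, not documented names.
DB="${ULTRAMEMORY_DB:-./memory.db}"
touch "$DB"
chmod 600 "$DB"   # owner read/write only; no group or world access

# Periodically review what has been stored, e.g. with the sqlite3 CLI
# (commented out; the table name "facts" is a guess):
# sqlite3 "$DB" "SELECT * FROM facts ORDER BY rowid DESC LIMIT 20;"
```

Periodic review matters because stale or wrong facts are injected into future sessions by design; deleting a bad row is the only way to stop it influencing later responses.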

Finding 4: Unauthenticated local memory API

What this means

If the optional local API server is running, other local processes or agents may be able to interact with shared memory depending on the server configuration.

Why it was flagged

The startup script sends agent IDs and recall queries to a localhost memory API and prints the returned context; the request carries no authentication.

Skill content
curl -s --max-time 5 -X POST "http://localhost:8642/api/startup-context" \
    -H "Content-Type: application/json" \
    -d "{\"agent_id\":\"${AGENT_ID}\",\"queries\":${QUERIES},\"top_k_per_query\":3}"
Recommendation

Keep the API bound to localhost, do not expose it on a network interface, and add access controls if using it in multi-agent or shared-machine setups.
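On shared machines, a startup guard can enforce the localhost-only rule before the server ever binds. This is a hypothetical sketch: the `MEMORY_API_BIND` variable is invented here and is not read by the skill itself.

```shell
# Hypothetical guard: refuse to start the memory API on a non-loopback
# address. MEMORY_API_BIND is an invented variable for this sketch.
is_loopback() {
  case "$1" in
    localhost|127.*|::1) return 0 ;;  # loopback forms only
    *) return 1 ;;
  esac
}

MEMORY_API_BIND="${MEMORY_API_BIND:-127.0.0.1}"
if ! is_loopback "$MEMORY_API_BIND"; then
  echo "ERROR: refusing to expose memory API on $MEMORY_API_BIND" >&2
  exit 1
fi
```

Binding to loopback does not protect against other local users or processes on the same host, so in multi-agent or shared-machine setups an additional access control (a shared token, socket permissions, or per-user instances) is still worth adding.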