Ultramemory
Verdict: Pass. Audited by ClawScan on May 10, 2026.
Overview
Ultramemory appears purpose-aligned as a persistent agent memory tool, but users should be aware it stores conversation-derived facts, uses LLM provider credentials, and depends on an external package.
Install only if you want an agent to maintain persistent memory. Pin and trust the ultramemory package source, use a dedicated LLM API key, avoid ingesting secrets or unnecessary private details, keep the local API private, and manage the SQLite memory database as sensitive data.
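The last recommendation — treating the SQLite memory database as sensitive data — can be sketched as a small shell helper. The default path is an assumption for illustration; the skill does not document where the database actually lives, so pass the real location:

```shell
# Sketch: restrict the memory database to the owning user.
# The default path below is an assumption, not documented by the skill;
# pass wherever ultramemory actually stores its SQLite file.
protect_memory_db() {
  local db="${1:-$HOME/.ultramemory/memory.db}"
  if [ -f "$db" ]; then
    chmod 600 "$db"   # owner read/write only; group/other get nothing
  fi
}
```

Usage: `protect_memory_db /path/to/memory.db` after install, and again after any restore from backup (restores often reset permissions).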
Findings (4)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Installing the skill requires trusting the external ultramemory package source and whatever version is current at install time.
The included scripts are wrappers; the core memory engine comes from an external, unpinned package or GitHub checkout.
    pip install ultramemory

    # Or from source
    git clone https://github.com/jared-goering/ultramemory.git
    cd ultramemory && pip install -e .
Install from a trusted source, pin a known-good version, and review the upstream package before using it with sensitive memories.
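One way to follow the pinning advice is a requirements fragment. The version below is a placeholder, not a known ultramemory release, and the commit placeholder must be replaced with a revision you have actually reviewed:

```text
# requirements.txt — pin ultramemory to a reviewed release
# (the version here is a placeholder, not a real release)
ultramemory==1.2.3

# or pin a source checkout to an exact commit you have audited:
# ultramemory @ git+https://github.com/jared-goering/ultramemory.git@<commit-sha>
```

Installing with `pip install -r requirements.txt` then resolves the same revision on every machine instead of "whatever version is current at install time."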
Ingested text may be processed using the configured LLM provider credentials, and users may need to expose an API key in the local environment.
The wrapper requires an Anthropic or OpenAI API key, which is expected for LLM-based fact extraction but gives the tool access to provider credentials.
    if [[ -z "${ANTHROPIC_API_KEY:-}" && -z "${OPENAI_API_KEY:-}" ]]; then
      echo "ERROR: ANTHROPIC_API_KEY or OPENAI_API_KEY must be set for fact extraction."
      exit 1
    fi

Use a dedicated, least-privilege API key where possible, avoid ingesting secrets, and monitor provider usage.
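The dedicated-key recommendation can be sketched as a wrapper function. `ANTHROPIC_API_KEY_UM` is a hypothetical variable (not part of the skill) holding a least-privilege key created only for ultramemory; the wrapper also unsets the general OpenAI key so the tool never sees your main credentials:

```shell
# Sketch: run the skill's scripts with a dedicated provider key.
# ANTHROPIC_API_KEY_UM is a hypothetical variable holding a
# least-privilege key created only for ultramemory; it is NOT
# something the skill defines.
run_with_dedicated_key() {
  env -u OPENAI_API_KEY \
      ANTHROPIC_API_KEY="${ANTHROPIC_API_KEY_UM:?set a dedicated ultramemory key}" \
      "$@"
}
```

Usage: `run_with_dedicated_key bash scripts/memory.sh ingest "some text"`. Usage spikes on the dedicated key then point unambiguously at this skill.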
Private, stale, or incorrectly extracted facts could persist across sessions and influence future responses.
The skill is designed to store conversation-derived facts and later inject recalled context into future agent sessions.
At the start of any session, hydrate context:

    bash scripts/startup-recall.sh <agent-id>

After meaningful conversations, pass the text directly:

    bash scripts/memory.sh ingest "User decided to use React for the frontend. Budget is $50k."
Only ingest information you want remembered, periodically inspect the memory database, and be cautious with sensitive personal, business, or credential-like content.
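The "avoid ingesting secrets" advice can be enforced mechanically with a pre-ingest guard. The patterns below are illustrative, not exhaustive, and `safe_ingest` is a hypothetical wrapper around the skill's documented ingest command:

```shell
# Sketch: refuse credential-like text before it reaches the persistent
# memory database. The patterns are illustrative only (AWS-style access
# key IDs, PEM private-key headers, password/key/token assignments).
looks_sensitive() {
  grep -Eqi 'AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----|(password|api[_-]?key|token)[=:]' <<<"$1"
}

# Hypothetical wrapper around the skill's documented ingest entry point.
safe_ingest() {
  if looks_sensitive "$1"; then
    echo "refusing to ingest: text looks credential-like" >&2
    return 1
  fi
  bash scripts/memory.sh ingest "$1"
}
```

A guard like this cannot catch every secret, so it complements, rather than replaces, periodic inspection of the memory database.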
If the optional local API server is running, other local processes or agents may be able to interact with shared memory depending on the server configuration.
The startup script sends agent IDs and recall queries to a localhost memory API and prints the returned context; the caller does not include authentication.
    curl -s --max-time 5 -X POST "http://localhost:8642/api/startup-context" \
      -H "Content-Type: application/json" \
      -d "{\"agent_id\":\"${AGENT_ID}\",\"queries\":${QUERIES},\"top_k_per_query\":3}"

Keep the API bound to localhost, do not expose it on a network interface, and add access controls if using it in multi-agent or shared-machine setups.
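The localhost-binding recommendation can be checked from the shell. Port 8642 comes from the startup script above; the use of `ss` (and its column layout) is an assumption about the host, and `netstat -ltn` is the older equivalent:

```shell
# Sketch: warn if the memory API port (8642, from the skill's startup
# script) is listening on anything other than the loopback interface.
is_exposed() {
  # $1 is a local "addr:port" field as printed by `ss -ltn` (column 4)
  case "$1" in
    "127.0.0.1:8642"|"[::1]:8642") return 1 ;;  # loopback only: fine
    *:8642) return 0 ;;                          # any other bind: exposed
    *) return 1 ;;                               # unrelated port: ignore
  esac
}

ss -ltn 2>/dev/null | awk 'NR>1 {print $4}' | while read -r addr; do
  if is_exposed "$addr"; then
    echo "WARNING: memory API bound to $addr (reachable beyond localhost)" >&2
  fi
done
```

Since the caller sends no authentication, a non-loopback bind would let any host on the network read and write shared memory.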
