agent-memory
v1.0.0 · Provides persistent memory for AI agents to remember facts, learn from experience, and track entities across sessions.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Scanner: OpenClaw · Verdict: Benign (high confidence)

Purpose & Capability
Name/description (persistent memory, facts, lessons, entities) matches the provided code and SKILL.md. No unrelated binaries, cloud credentials, or external services are required.
Instruction Scope
SKILL.md and the CLI wrappers confine operations to storing/recalling/updating local memories and integrating with an agent lifecycle. There are no instructions to read unrelated system files, transmit data externally, or access secrets.
Install Mechanism
No install spec — code is bundled in the skill and has no external dependencies. That reduces remote code fetch risk. The package creates/uses a local SQLite DB as expected.
Credentials
The skill requests no environment variables or credentials. The only persistent resource it uses is a local SQLite DB (default ~/.agent-memory/memory.db), which is appropriate for a memory store.
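To make the review's point concrete, here is a minimal sketch of how a local SQLite memory store like this one might initialize its database at the documented default path. The function name `open_memory_db` and the table schema are assumptions for illustration; only the default path `~/.agent-memory/memory.db` comes from the scan above.

```python
import os
import sqlite3

def open_memory_db(db_path=None):
    """Open (creating if needed) a local SQLite memory store.

    Default location mirrors the skill's documented
    ~/.agent-memory/memory.db. The schema here is hypothetical.
    """
    path = db_path or os.path.expanduser("~/.agent-memory/memory.db")
    parent = os.path.dirname(path)
    if parent:
        # Create the containing directory on first use, as the scan notes.
        os.makedirs(parent, exist_ok=True)
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS memories ("
        "id INTEGER PRIMARY KEY, kind TEXT, content TEXT)"
    )
    return conn
```

Because everything runs through the standard-library `sqlite3` module against a local file, there is no network surface to audit, which is consistent with the "no external dependencies" finding above.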
Persistence & Privilege
The skill will create and write a database under the user's home directory by default and can be pointed at an arbitrary db_path. This is normal for a local memory store, but be aware a user-supplied db_path could point to sensitive locations (permissions permitting). It does not request always:true and uses normal autonomous invocation behavior.
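If the arbitrary `db_path` worries you, a wrapper can confine it before handing it to the skill. This is a sketch of one possible policy, not part of the skill's code; `ALLOWED_ROOT` and `safe_db_path` are hypothetical names.

```python
import os

# Hypothetical policy: only allow paths under the skill's own directory.
ALLOWED_ROOT = os.path.realpath(os.path.expanduser("~/.agent-memory"))

def safe_db_path(db_path):
    """Resolve db_path (expanding ~ and symlinks) and reject any
    location outside ALLOWED_ROOT, e.g. /etc/passwd."""
    resolved = os.path.realpath(os.path.expanduser(db_path))
    if os.path.commonpath([resolved, ALLOWED_ROOT]) != ALLOWED_ROOT:
        raise ValueError(f"db_path escapes allowed root: {resolved}")
    return resolved
```

Resolving symlinks with `os.path.realpath` before the containment check matters: a symlink inside the allowed directory could otherwise point at a sensitive file elsewhere.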
Assessment
This skill appears to be what it says: a local SQLite-based memory for agents. Before installing, note:
(1) it creates/writes a DB at ~/.agent-memory/memory.db by default; review and protect that file (file permissions, backups, encryption) if it will store sensitive information;
(2) db_path can be set to any path; do not point it at system-critical files (e.g., /etc/passwd) or shared secrets;
(3) the code performs no network calls and requests no credentials, but any agent that uses this memory could export or transmit memories elsewhere; treat stored memories as data your agents might surface;
(4) there are minor code-quality issues (e.g., API surface hints in __init__ referencing get_memory) that are functional concerns, not indicators of malicious behavior.
If you need stricter isolation, run the skill under a restricted user or in a container and review the DB contents regularly.
Like a lobster shell, security has layers: review code before you run it.
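The "protect that file" advice in point (1) can be applied in one line. The sketch below restricts a memory DB to owner read/write (mode 0600); `lock_down` is an illustrative helper, not something the skill provides.

```python
import os
import stat

def lock_down(db_path):
    """Restrict a memory DB to owner read/write (0o600) and
    return the resulting permission bits for verification."""
    os.chmod(db_path, stat.S_IRUSR | stat.S_IWUSR)
    return stat.S_IMODE(os.stat(db_path).st_mode)
```

Running this once after the DB is created keeps other local users from reading stored memories; it does not protect against the agent itself exporting them, which is point (3) above.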
latest · vk97bwkxsbm5a5xmc9c9dtyak7s833zm8
