Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Tinmem Memory System
v1.0.0
Provides persistent memory management for storing, retrieving, updating, and deleting user-related information across conversations in OpenClaw AI.
⭐ 0 · 401 · 1 current · 1 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious · medium confidence

Purpose & Capability
Name and description (persistent memory) align with the SKILL.md tools (store, recall, update, forget). However, the SKILL.md explicitly states memories persist in a local LanceDB database while the skill provides no install steps, no config paths, and no detail about where that database lives—an implementation detail mismatch that should be clarified.
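Since the package is instruction-only, there is no code to inspect; the following sketch only illustrates the kind of store/recall/update/forget tool set the SKILL.md describes over a local database. The class name, schema, and storage path are assumptions (not taken from SKILL.md), and sqlite3 stands in for LanceDB to keep the example dependency-free.

```python
import sqlite3

# Illustrative model of the four tools SKILL.md names; not the skill's code.
class MemoryStore:
    def __init__(self, path=":memory:"):
        # A real deployment would need a disclosed filesystem path here —
        # exactly the detail the SKILL.md omits.
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memories (key TEXT PRIMARY KEY, value TEXT)"
        )

    def store(self, key, value):
        self.db.execute("INSERT OR REPLACE INTO memories VALUES (?, ?)", (key, value))
        self.db.commit()

    def recall(self, key):
        row = self.db.execute(
            "SELECT value FROM memories WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else None

    def update(self, key, value):
        self.store(key, value)

    def forget(self, key):
        # Hard delete from the primary table only; says nothing about backups.
        self.db.execute("DELETE FROM memories WHERE key = ?", (key,))
        self.db.commit()
```

Even this minimal model makes the open questions concrete: where `path` points, who can read it, and whether `forget` reaches copies of the data elsewhere.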
Instruction Scope
Instructions direct the agent to automatically inject memories into context before each response and automatically extract new memories after each conversation turn. That implies continual collection and reuse of potentially sensitive user data across sessions and responses, which is broader than many users expect and isn't constrained by retention, consent, or filtering rules in the doc.
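SKILL.md defines no filtering rules, so a sketch of the kind of redaction gate a safer design would run before automatic extraction may clarify what is missing. The patterns and policy below are hypothetical examples, not anything the skill provides.

```python
import re

# Hypothetical pre-storage filter: redact obviously sensitive strings before
# any memory is extracted from a conversation turn. Patterns are illustrative.
SENSITIVE = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def redact_before_storing(text: str) -> str:
    for pattern in SENSITIVE:
        text = pattern.sub("[REDACTED]", text)
    return text
```

The absence of any such rule in the SKILL.md means every turn's content is, by default, eligible for persistent storage.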
Install Mechanism
No install spec or code is provided (instruction-only). That reduces immediate disk risk, but the README claims use of a local LanceDB database (which would require filesystem access and some runtime components). The lack of install/runtime details is an inconsistency to resolve.
Credentials
The skill requests no environment variables or credentials, yet its behavior involves persistent local storage and automatic data extraction/injection. There is no mention of config paths, encryption, access control, retention policy, or how deletion (forget) is enforced, so the privacy and credential model is underspecified and leaves far more open than the skill's stated purpose justifies.
Persistence & Privilege
The skill does not set always:true, but it instructs the agent to persist data across sessions and to automatically inject memories into context on every response. That grants the agent broad persistence and data reuse capability; without clear limits or user consent controls, this is a meaningful privilege and privacy risk.
Scan Findings in Context
[NO_SCAN_FINDINGS] expected: The package is instruction-only (no code files) so the regex scanner found nothing. That is expected, but absence of findings is not evidence that the skill is safe—most of the surface is in the prose of SKILL.md.
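To make this limitation concrete, here is a minimal sketch of how a pattern-based scanner works and why it reports nothing on a prose-only package. The patterns below are invented examples, not ClawHub's actual rules.

```python
import re

# Illustrative pattern scanner: it can only flag code-like constructs,
# so instruction-only prose passes cleanly regardless of its behavior.
SUSPICIOUS_PATTERNS = [
    re.compile(r"curl\s+[^|]*\|\s*(?:ba)?sh"),  # piping a download into a shell
    re.compile(r"eval\s*\("),                   # dynamic code execution
]

def scan(text: str) -> list[str]:
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(text)]
```

Prose such as "automatically inject memories into context" triggers none of these rules, which is why a clean regex scan says nothing about whether the instructions themselves are safe.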
What to consider before installing
Before installing or enabling this skill, ask the developer or registry operator to clarify:
1. Where exactly are memories stored (filesystem path), and who can access that storage?
2. Is data encrypted at rest and in transit?
3. How is "automatic extraction after each turn" scoped: what data is captured, and under what rules?
4. How does memory_forget guarantee deletion, and is deletion propagated to backups?
5. Are there retention and consent controls?
6. Who operates the LanceDB instance, and what permissions does it need?
If you cannot get clear answers and a trustworthy source repository or homepage, treat this skill as high-risk for privacy and test it only in a sandboxed environment, or decline to install.
Tags: ai-agent · lancedb · latest · memory · persistence · vector-search
