Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Musk Neural Memory

v1.0.0

An associative neural memory system based on spreading activation, providing persistent cross-session recall, causal reasoning, and contradiction detection, with multi-level deep intelligent queries.

Security Scan

VirusTotal: Pending (View report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill claims cross-session persistent memory, snapshots, rollbacks and 'transplant' between brains, but declares no storage paths, no environment variables, and no external services. That is internally inconsistent: persistent storage or inter-project transfer normally requires a datastore, config path, or credentials. Also _meta.json contents (ownerId, slug, version/publishedAt) do not match the registry-level metadata, which suggests packaging/authoring inconsistencies.
Instruction Scope
SKILL.md instructs automatic capture of conversation content (nmem_auto action=process), automatic injection of context at session start, and storing decisions/errors/preferences. It does not specify where data is stored, retention, access controls, or user consent. This broad automatic capture of user text increases privacy/exfiltration risk and grants the skill scope beyond a simple recall helper.
Install Mechanism
This is instruction-only with no install spec and no code files to write to disk, which is lower-risk from an install/execution standpoint. The regex scanner had nothing to analyze because there are no code files.
Credentials
The skill requests no credentials or env vars despite describing persistent, possibly cross-agent data operations (snapshots, transplant). That absence is disproportionate and unexplained. Additionally, the SKILL.md claims 'zero LLM dependency' while describing automated semantic extraction from arbitrary dialogue — in practice this often requires heavier tooling; the discrepancy is noteworthy.
Persistence & Privilege
Although always:false (so not force-installed), the skill's design implies long-lived storage and cross-project transfer of memories. Without details about where memories live, who can access them, and how to opt out, the persistence model is a significant privacy/privilege concern. Autonomous invocation combined with auto-capture would widen impact if implemented without safeguards.
Scan Findings in Context
[no_regex_matches] expected: No code files present, so the regex-based scanner produced no findings. This is expected for an instruction-only SKILL.md, but leaves the runtime behavior unspecified.
What to consider before installing
Key questions before installing:

1. Where are memories persisted? Ask the author for the storage location (database, cloud, platform memory) and what credentials/config are required.
2. Who can read/export/delete stored memories? Request access controls, encryption-at-rest, and deletion/portability mechanisms.
3. How does autoCapture work? If you install, insist on the ability to disable automatic capture and require explicit user consent before storing PII.
4. Verify the author/packaging: metadata in _meta.json does not match the registry listing (ownerId/slug/version mismatch), and the footer claim ('马斯克出品', "produced by Musk") may be misleading; verify provenance.
5. Request an implementation or runtime spec: how are the nmem_* tools implemented and invoked by the platform? Without those details, the skill's promise of persistent, cross-session and cross-project memory is not verifiable.

If you must test it, run it in an isolated account/session with no sensitive data, disable autoCapture, and confirm where data appears and how to delete it.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9748bgntvcg3nja9211yg5h2184qh6c
43 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

Neural Memory

One-line description: Associative memory using spreading activation for persistent, intelligent recall.


Overview

A biologically inspired memory system that uses spreading activation instead of keyword/vector search. Memories form a neural graph in which neurons are connected by 20 types of synapses. Memories that are frequently accessed together strengthen their connections (Hebbian learning). Stale memories decay naturally. Contradictions are detected automatically.

Why not just vector search? Vector search finds documents similar to the query. NeuralMemory finds conceptually related memories via graph traversal, even when there is no keyword or embedding overlap.
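The skill ships no code, but the spreading-activation idea it describes can be sketched as a decaying breadth-first walk over a weighted memory graph. Everything below (the graph shape, the decay and threshold constants, the edge weights) is a hypothetical illustration, not the skill's actual implementation:

```python
from collections import defaultdict

def spreading_activation(graph, seeds, decay=0.5, threshold=0.1, max_hops=3):
    """Propagate activation from seed memories through weighted synapses.

    graph: dict mapping node -> list of (neighbor, weight) edges.
    Returns (node, activation) pairs above threshold, strongest first.
    """
    activation = defaultdict(float)
    for node in seeds:
        activation[node] = 1.0
    frontier = dict.fromkeys(seeds, 1.0)
    for _ in range(max_hops):
        next_frontier = defaultdict(float)
        for node, energy in frontier.items():
            for neighbor, weight in graph.get(node, []):
                spread = energy * weight * decay   # energy fades with each hop
                if spread > threshold:
                    next_frontier[neighbor] += spread
        for node, energy in next_frontier.items():
            activation[node] = max(activation[node], energy)
        frontier = next_frontier
    return sorted(activation.items(), key=lambda kv: -kv[1])

# Hypothetical memory graph: "pg_dump" shares no keyword with the seed,
# but surfaces because it is two hops away via "deployment".
graph = {
    "postgres": [("deployment", 0.9), ("sqlite", 0.4)],
    "deployment": [("pg_dump", 0.8)],
}
results = spreading_activation(graph, ["postgres"])
```

Tuning `decay` and `threshold` would trade recall breadth against noise, which is presumably what the depth levels below control.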


Core Features

  • Zero LLM dependency: pure algorithms (regex, graph traversal, Hebbian learning)
  • Spreading activation: associative recall through a neural graph, not keyword/vector search
  • 20 synapse types: temporal (BEFORE/AFTER), causal (CAUSED_BY/LEADS_TO), semantic (IS_A/HAS_PROPERTY), emotional (FELT/EVOKES), conflict (CONTRADICTS)
  • Memory lifecycle: short-term → working → episodic → semantic, with Ebbinghaus decay
  • Contradiction detection: conflicting memories are detected automatically and stale ones deprioritized
  • Hebbian learning: "neurons that fire together wire together"; memories improve with use
  • Temporal reasoning: causal-chain traversal, event sequences, time-range queries
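As a rough illustration of two of the mechanisms named above: Ebbinghaus-style decay is commonly modeled as exponential retention, and a Hebbian update can be a saturating weight increase on each co-activation. The formulas and constants here are generic textbook forms, not taken from the skill:

```python
import math

def ebbinghaus_retention(hours_elapsed, stability=24.0):
    """Retention R = exp(-t/S): a memory fades unless it is reinforced.
    `stability` (hypothetical) grows as a memory moves toward semantic storage."""
    return math.exp(-hours_elapsed / stability)

def hebbian_update(weight, learning_rate=0.1):
    """Strengthen a synapse each time both memories fire together,
    saturating toward 1.0 ("neurons that fire together wire together")."""
    return weight + learning_rate * (1.0 - weight)

w = 0.3
for _ in range(5):          # five co-activations strengthen the link
    w = hebbian_update(w)
```

The saturating form keeps weights bounded, so frequently co-accessed memories dominate recall without any single edge growing unbounded.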

Depth Levels

| Depth | Name    | Speed  | Use case                                 |
|-------|---------|--------|------------------------------------------|
| 0     | Instant | <10ms  | Quick facts, recent context              |
| 1     | Context | ~50ms  | Standard recall (default)                |
| 2     | Habit   | ~200ms | Pattern matching, workflow suggestions   |
| 3     | Deep    | ~500ms | Cross-domain associations, causal chains |

Tool Reference

Core memory tools

| Tool          | Purpose                          | When to use                                                               |
|---------------|----------------------------------|---------------------------------------------------------------------------|
| nmem_remember | Store a memory                   | After decisions, errors, facts, insights, and user preferences            |
| nmem_recall   | Query memories                   | Before tasks, when the user references past context, "do you remember..." |
| nmem_context  | Fetch recent memories            | At session start, to inject fresh context                                 |
| nmem_todo     | Quick TODOs (expire in 30 days)  | Task tracking                                                             |

Intelligence tools

| Tool                  | Purpose                         | When to use                                                            |
|-----------------------|---------------------------------|------------------------------------------------------------------------|
| nmem_auto             | Auto-extract memories from text | After important conversations: auto-captures decisions, errors, TODOs  |
| nmem_recall (depth=3) | Deep associative recall         | Complex questions that need cross-domain connections                   |
| nmem_habits           | Workflow pattern suggestions    | When the user repeats similar action sequences                         |

Management tools

| Tool            | Purpose                          | When to use                                  |
|-----------------|----------------------------------|----------------------------------------------|
| nmem_health     | Brain health diagnostics         | Periodic checks, before sharing a brain      |
| nmem_stats      | Brain statistics                 | Quick overview of memory counts              |
| nmem_version    | Brain snapshots and rollback     | Before risky operations, version checkpoints |
| nmem_transplant | Transfer memories between brains | Cross-project knowledge sharing              |

Usage Examples

Remembering a decision

nmem_remember(
  content="Use PostgreSQL in production, SQLite in development",
  type="decision",
  tags=["database", "infrastructure"],
  priority=8
)

Associative recall

nmem_recall(
  query="production database configuration",
  depth=1,
  max_tokens=500
)

Returns memories found via graph traversal, not keyword matching. Related memories (e.g. "deployment uses Docker with pg_dump backups") surface even without shared keywords.

Tracing a causal chain

nmem_recall(
  query="Why did last week's deployment fail?",
  depth=2
)

Follows CAUSED_BY and LEADS_TO synapses to trace chains of cause and effect.
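A minimal sketch of what following CAUSED_BY links backwards could look like, assuming each event stores a single direct cause. The event names and the `dict`-of-causes data structure are invented for illustration; the skill does not document its internals:

```python
def trace_causes(caused_by, event, max_depth=5):
    """Walk CAUSED_BY synapses backwards from an event to build a causal chain.

    caused_by: dict mapping event -> its direct cause (CAUSED_BY target).
    Stops at a root cause or after max_depth hops.
    """
    chain = [event]
    while event in caused_by and len(chain) <= max_depth:
        event = caused_by[event]
        chain.append(event)
    return chain

# Hypothetical chain: deploy failed <- migration timed out <- missing index
caused_by = {
    "deploy failed": "migration timed out",
    "migration timed out": "missing index on users table",
}
chain = trace_causes(caused_by, "deploy failed")
```

A real graph would allow multiple causes per event, turning this into a tree or DAG walk, but the single-parent case shows the traversal idea.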

Auto-capture from conversation

nmem_auto(
  action="process",
  text="We decided to switch from REST to GraphQL because the frontend needs flexible queries. The migration will take 2 sprints. TODO: update the API docs."
)

Automatically extracts: 1 decision, 1 fact, 1 TODO.
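Since the skill claims zero LLM dependency and pure regex extraction, nmem_auto's `process` action presumably resembles pattern matching along these lines. The patterns and trigger phrases below are hypothetical, not the skill's actual rules:

```python
import re

# Hypothetical trigger phrases; a real extractor would need far more patterns.
PATTERNS = {
    "decision": re.compile(r"(?:decided to|switching to|we will use)\s+(.+?)(?:\.|$)", re.I),
    "todo": re.compile(r"TODO:\s*(.+?)(?:\.|$)", re.I),
}

def extract_memories(text):
    """Scan free text for decision/TODO phrases and emit typed memory candidates."""
    found = []
    for mem_type, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            found.append({"type": mem_type, "content": match.group(1).strip()})
    return found

notes = "We decided to switch from REST to GraphQL. TODO: update the API docs."
memories = extract_memories(notes)
```

This is exactly where the reviewer's skepticism applies: regex trigger phrases are brittle compared with semantic extraction, so "zero LLM dependency" trades robustness for determinism.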


Workflow

At session start

  1. Call nmem_context to inject recent memories into awareness
  2. If the user mentions a specific topic, call nmem_recall

During conversation

  1. When a decision is made: nmem_remember type="decision"
  2. When an error occurs: nmem_remember type="error"
  3. When the user states a preference: nmem_remember type="preference"
  4. When asked about past events: nmem_recall

At session end

  1. Call nmem_auto action="process" on important conversation segments
  2. Facts, decisions, errors, and TODOs are extracted automatically

Core Parameters

| Parameter    | Type | Range      | Default | Description           |
|--------------|------|------------|---------|-----------------------|
| depth        | int  | 0-3        | 1       | Recall depth level    |
| priority     | int  | 0-10       | 5       | Memory priority       |
| max_tokens   | int  | 100-10000  | 500     | Max context tokens    |
| contextDepth | int  | 0-3        | 1       | Context depth         |
| autoContext  | bool | true/false | true    | Auto-inject context   |
| autoCapture  | bool | true/false | true    | Auto-capture memories |

Version History

| Version | Date       | Changes         |
|---------|------------|-----------------|
| v1.0.0  | 2026-04-12 | ClawHub release |

🎩 马斯克出品 (Produced by Musk) | Building the strongest agent on Earth
