Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Musk Neural Memory (马斯克神经记忆)

v1.0.0

An associative neural memory system based on spreading activation, providing persistent cross-session recall, causal reasoning, and contradiction detection, with support for multi-level deep intelligent queries.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Pending
View report →
OpenClaw
Suspicious
medium confidence
⚠ Purpose & Capability
The skill claims cross-session persistent memory, snapshots, rollbacks and 'transplant' between brains, but declares no storage paths, no environment variables, and no external services. That is internally inconsistent: persistent storage or inter-project transfer normally requires a datastore, config path, or credentials. Also _meta.json contents (ownerId, slug, version/publishedAt) do not match the registry-level metadata, which suggests packaging/authoring inconsistencies.
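One concrete verification step is to diff the packaged _meta.json fields against the registry listing. A minimal sketch in Python, with made-up illustrative values (the real files were not inspected here); the field names ownerId, slug, and version follow the mismatch described above:

```python
# Hypothetical cross-check of packaged metadata against the registry
# listing; the values below are illustrative, not taken from the package.
packaged = {"ownerId": "author-1", "slug": "neural-memory", "version": "1.0.0"}
registry = {"ownerId": "author-2", "slug": "musk-neural-memory", "version": "1.0.0"}

# Collect every field whose packaged value disagrees with the registry.
mismatches = {
    field: (packaged[field], registry[field])
    for field in packaged.keys() & registry.keys()
    if packaged[field] != registry[field]
}

for field, (pkg_val, reg_val) in sorted(mismatches.items()):
    print(f"{field}: package={pkg_val!r} vs registry={reg_val!r}")
```

Any non-empty result is a packaging red flag worth raising with the author before installing.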
⚠ Instruction Scope
SKILL.md instructs automatic capture of conversation content (nmem_auto action=process), automatic injection of context at session start, and storing decisions/errors/preferences. It does not specify where data is stored, retention, access controls, or user consent. This broad automatic capture of user text increases privacy/exfiltration risk and grants the skill scope beyond a simple recall helper.
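The missing safeguard can be made concrete as a capture hook that refuses to persist conversation text without an explicit double opt-in. This is a hypothetical sketch only: the skill declares no such control, and nmem_auto is merely named in SKILL.md, never specified.

```python
stored_memories = []  # stand-in for whatever datastore the skill would use

def nmem_auto(text, *, auto_capture_enabled=False, user_consented=False):
    """Persist conversation text only behind an explicit double opt-in.

    Hypothetical signature: the real nmem_auto is referenced in SKILL.md
    ("action=process") but its implementation is nowhere specified.
    """
    if not (auto_capture_enabled and user_consented):
        return False  # drop the text rather than silently storing it
    stored_memories.append(text)
    return True

# Default-deny: nothing is captured unless both flags are set.
print(nmem_auto("user said something private"))  # False
print(nmem_auto("ok to keep", auto_capture_enabled=True,
                user_consented=True))            # True
```

A default-deny design like this is what "insist on the ability to disable automatic capture" means in practice.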
Install Mechanism
This is instruction-only with no install spec and no code files to write to disk, which is lower-risk from an install/execution standpoint. The regex scanner had nothing to analyze because there are no code files.
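Why "nothing to analyze" is the expected result follows from how a regex scanner typically works. A minimal sketch, under assumptions: the actual ClawHub scanner rules are unknown, and the two patterns here are illustrative.

```python
import re
from pathlib import Path

# Two illustrative rules; a real scanner would ship many more.
RULES = {
    "subprocess_exec": re.compile(r"subprocess\.(run|Popen|call)"),
    "env_access": re.compile(r"os\.environ"),
}
CODE_SUFFIXES = {".py", ".js", ".sh"}

def scan(files):
    """Match rules against code files only; docs like SKILL.md are skipped."""
    findings = {}
    for name, text in files.items():
        if Path(name).suffix not in CODE_SUFFIXES:
            continue
        hits = [rule for rule, pattern in RULES.items() if pattern.search(text)]
        if hits:
            findings[name] = hits
    return findings

# An instruction-only package has no code files, so the scanner sees
# nothing even if SKILL.md mentions risky-looking tokens.
print(scan({"SKILL.md": "run os.environ dumps via subprocess.call"}))  # {}
```

This is why a clean regex scan of an instruction-only skill says nothing about its runtime behavior.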
⚠ Credentials
The skill requests no credentials or env vars despite describing persistent, possibly cross-agent data operations (snapshots, transplant). That absence is disproportionate and unexplained. Additionally, the SKILL.md claims 'zero LLM dependency' while describing automated semantic extraction from arbitrary dialogue — in practice this often requires heavier tooling; the discrepancy is noteworthy.
⚠ Persistence & Privilege
Although always:false (so not force-installed), the skill's design implies long-lived storage and cross-project transfer of memories. Without details about where memories live, who can access them, and how to opt out, the persistence model is a significant privacy/privilege concern. Autonomous invocation combined with auto-capture would widen impact if implemented without safeguards.
Scan Findings in Context
[no_regex_matches] expected: No code files present, so the regex-based scanner produced no findings. This is expected for an instruction-only SKILL.md, but leaves the runtime behavior unspecified.
What to consider before installing
Key questions before installing:

1. Where are memories persisted? Ask the author for the storage location (database, cloud, platform memory) and what credentials or config are required.
2. Who can read, export, or delete stored memories? Request access controls, encryption at rest, and deletion/portability mechanisms.
3. How does autoCapture work? If you install, insist on the ability to disable automatic capture, and require explicit user consent before storing PII.
4. Verify the author and packaging. The metadata in _meta.json does not match the registry listing (ownerId/slug/version mismatch), and the footer claim ('马斯克出品', "produced by Musk") may be misleading; verify provenance.
5. Request an implementation or runtime spec: how are the nmem_* tools implemented and invoked by the platform?

Without those details, the skill's promise of persistent, cross-session, cross-project memory is not verifiable. If you must test it, run it in an isolated account/session with no sensitive data, disable autoCapture, and confirm where data appears and how to delete it.
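The last step, confirming where data appears, can begin before install by sweeping the unpacked package for code files and storage hints. A sketch under assumptions: a throwaway fixture stands in for the real zip contents, and the hint patterns are illustrative.

```python
import re
import tempfile
from pathlib import Path

# Throwaway fixture standing in for the unpacked skill zip; the real
# package contents were not inspected here.
pkg = Path(tempfile.mkdtemp())
(pkg / "SKILL.md").write_text("autoCapture: true\nnmem_auto action=process\n")

# 1) Confirm there really are no code files hiding in the package.
code_files = sorted(p.name for p in pkg.rglob("*")
                    if p.suffix in {".py", ".js", ".sh"})
print("code files:", code_files)  # expect an empty list here

# 2) Hunt for any hint of where memories would live (paths, env, URLs).
storage_hint = re.compile(r"path|storage|env|https?://|autoCapture", re.I)
for doc in sorted(pkg.rglob("*.md")):
    hits = [line for line in doc.read_text().splitlines()
            if storage_hint.search(line)]
    print(doc.name, "->", hits)
```

If the sweep turns up neither code nor storage declarations, the persistence claims remain unverifiable, which is exactly the concern raised above.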

Like a lobster shell, security has layers — review code before you run it.

latest: vk9748bgntvcg3nja9211yg5h2184qh6c

