Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Neural Memory CN
v1.0.1 · A neural-network-inspired memory system with activation spreading and associative retrieval. Works out of the box in local mode after install; configure an LLM to enable intelligent intent analysis.
⭐ 0 · 157 · 0 current · 0 all-time
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The code and README implement a local neural-memory system (neurons, synapses, activation spreading, storage). Required binaries (python) and the optional LLM integration align with the declared purpose. The presence of an adapter that merges results with OpenClaw's memory search is consistent with a memory-enhancement skill.
Instruction Scope
Runtime instructions and code operate on local storage under ~/.openclaw/neural-memory and expose APIs like think(), learn_and_think(), save(). However the ThinkingAdapter imports and calls openclaw.memory.memory_search and maps neurons to files such as UserProfile.md and Preferences.md — meaning the skill will access and reuse other system memory/search results. This is plausible for a memory-augmentation skill but increases the scope of data the skill can read and return.
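To make the scope concern concrete, here is a minimal sketch of the merge pattern the adapter appears to use. The `memory_search` stub below stands in for `openclaw.memory.memory_search`, whose real signature is not shown in this listing; the class name and return shapes are assumptions for illustration.

```python
def memory_search(query):
    # Stub simulating platform memory-search hits; the real call is
    # openclaw.memory.memory_search, not documented here.
    return [{"file": "UserProfile.md", "score": 0.9},
            {"file": "Preferences.md", "score": 0.7}]

class ThinkingAdapterSketch:
    """Illustrates why such an adapter widens data scope: every platform
    hit (including user-profile files) flows into the skill's results."""

    def __init__(self, neuron_index):
        # neuron_index: local mapping of neuron file targets to activation scores
        self.neuron_index = neuron_index

    def think(self, query):
        platform_hits = memory_search(query)      # platform-owned data
        local_hits = [{"file": f, "score": s}
                      for f, s in self.neuron_index.items()]
        # Merged output mixes platform memory with local neurons,
        # so the skill can surface files it did not itself create.
        merged = platform_hits + local_hits
        return sorted(merged, key=lambda h: h["score"], reverse=True)

adapter = ThinkingAdapterSketch({"notes/project.md": 0.8})
results = adapter.think("user preferences")
```

The point of the sketch is that once results are merged, platform files such as UserProfile.md can rank above the skill's own local data in whatever the skill returns.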
Install Mechanism
There is no remote download/install URL; this is an instruction-and-code skill with bundled Python scripts. No installer that extracts from a URL or pulls arbitrary binaries was included. Installation appears to be via the platform's skill installer (npx clawhub), which places these files locally.
Credentials
LLM integration is optional and requires API keys, which is reasonable. However the skill references multiple environment variable conventions (SKILL.md suggests NEURAL_MEMORY_LLM_API_KEY/NEURAL_MEMORY_LLM_BASE_URL/NEURAL_MEMORY_LLM_MODEL; intent_layer._get_openrouter_key looks for OPENROUTER_API_KEY; some code tries to use the openai client if present). This multiplicity could cause the skill to pick up an existing LLM credential unexpectedly. No other unrelated credentials are requested.
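The credential risk comes from fallback lookup across several variable names. A minimal sketch, assuming a lookup order (the variable names are taken from the scan; the actual order in `intent_layer._get_openrouter_key` is an assumption):

```python
import os

def resolve_llm_key(env=os.environ):
    # A skill-specific variable is checked first...
    key = env.get("NEURAL_MEMORY_LLM_API_KEY")
    if key:
        return key, "NEURAL_MEMORY_LLM_API_KEY"
    # ...but falling back to a generic variable means a pre-existing
    # credential set for some other tool can be picked up silently.
    key = env.get("OPENROUTER_API_KEY")
    if key:
        return key, "OPENROUTER_API_KEY"
    return None, None

# If only OPENROUTER_API_KEY is set (for an unrelated tool), the skill
# still obtains a working credential without the user opting in:
key, source = resolve_llm_key({"OPENROUTER_API_KEY": "sk-or-123"})
```

Before installing, check which of these variables are already set in your environment so you know which key the skill would end up using.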
Persistence & Privilege
The skill does not request 'always: true' and follows normal autonomy defaults. Its config templates enable integration options (e.g., memory_search_enhancement, create_thinking_endpoint, auto_link_knowledge) and the adapter constructs an integration that will call into platform memory — this gives it broader local data access but does not itself elevate system privileges. Review how the platform wires adapters/endpoints before enabling.
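A conservative starting posture is to leave every integration toggle off until the wiring has been reviewed. The keys below are the option names the scan reports; representing the template as a Python dict is an assumption about its format:

```python
# Hedged sketch of a safer config posture: all integration toggles off
# until you have reviewed how the platform wires adapters/endpoints.
SAFE_CONFIG = {
    "memory_search_enhancement": False,  # don't merge into platform search yet
    "create_thinking_endpoint": False,   # no extra endpoint until reviewed
    "auto_link_knowledge": False,        # no automatic indexing of memory files
}

def enabled_integrations(config):
    # Review this list before flipping any toggle to True.
    return [name for name, on in config.items() if on]
```

With this posture `enabled_integrations(SAFE_CONFIG)` is empty, and each toggle can be enabled one at a time after inspection.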
What to consider before installing
This skill appears to implement a local neural-memory system and will read and persist memory files under ~/.openclaw/neural-memory. Before installing:

1. Inspect where the skill will be installed: the SKILL.md suggests ~/.openclaw/skills/... while the setup writes to ~/.openclaw/neural-memory; confirm the expected paths.
2. Be cautious about supplying any LLM API keys; the code looks for multiple env vars (NEURAL_MEMORY_LLM_*, OPENROUTER_API_KEY) and may use an available OpenAI/OpenRouter key unexpectedly.
3. The adapter integrates with the platform's existing memory/search and maps neurons to user-related files (UserProfile.md, Preferences.md). If you store sensitive profile data in platform memory, expect this skill to surface or index it unless you change protection settings.
4. If you want to proceed, review the bundled Python files locally (especially the adapter and intent layer) and run the setup in a sandbox or test environment first.

If anything is unclear, ask the skill author to clarify path and env-var behavior and how integrations with openclaw.memory are gated.
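The local-review step can be partly automated. A minimal sketch, assuming only that the bundled files are plain Python: it greps the installed skill directory for environment-variable reads and for imports of the platform memory module, so you can see at a glance which credentials and data the code touches.

```python
import pathlib
import re

def audit(skill_dir):
    """Scan bundled .py files for env-var reads and platform-memory imports."""
    findings = []
    for py in pathlib.Path(skill_dir).rglob("*.py"):
        text = py.read_text(errors="ignore")
        # Matches os.environ.get('NAME') and os.environ['NAME'] forms.
        for var in re.findall(r"os\.environ(?:\.get\(|\[)['\"](\w+)['\"]", text):
            findings.append((py.name, "env", var))
        if "openclaw.memory" in text:
            findings.append((py.name, "import", "openclaw.memory"))
    return findings
```

Run it against the directory the installer used and compare the reported env-var names against the list in the scan above before setting any keys.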
latest: vk977yxx0tsea5jh2a1a4gx1a19832msw
Runtime requirements
Bins: python
