Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Smart Memory

v2.5.1

Persistent local cognitive memory for OpenClaw via a Node adapter and FastAPI engine.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for chenghaifeng08-creator/smart-memory-automaton.

Prompt preview (Install & Setup):
Install the skill "Smart Memory" (chenghaifeng08-creator/smart-memory-automaton) from ClawHub.
Skill page: https://clawhub.ai/chenghaifeng08-creator/smart-memory-automaton
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install smart-memory-automaton

ClawHub CLI

Package manager switcher

npx clawhub@latest install smart-memory-automaton
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
Name/description match the included code: FastAPI server, Node adapter, ingestion/retrieval, background cognition, and OpenClaw hooks. The files and exported methods align with a local memory runtime. Minor mismatch: code reads an env var (COGNITIVE_EMBEDDER) and supports Nomic embeddings but requires no declared env vars in the registry metadata.
Instruction Scope
SKILL.md instructs agents/operators to inject active context into the agent base prompt and to run priming scripts that read local files (SOUL.md, USER.md, .session-memory-context.json, memory/YYYY-MM-DD.md). This is functionally consistent with a memory skill, but the explicit advice to modify the agent base prompt and to run startup scripts is a prompt-injection vector that can influence model behavior beyond typical tool calls. The docs also instruct reading workspace files — appropriate for memory but increases risk of accidental exposure of sensitive files if misconfigured. The changelog references earlier path-traversal fixes and allowlists, which mitigates some file-read risk but the runtime instructions still grant broad discretion to read and inject local context.
Install Mechanism
There is no automated install spec in the registry, but SKILL.md directs full local installation (python venv, pip install torch from PyTorch CPU index, pip -r requirements, npm install). The package contains postinstall.js and install scripts (postinstall, install.sh, smem-hook.sh). More importantly, the local Nomic embedder uses sentence-transformers with trust_remote_code=True and model nomic-ai/nomic-embed-text-v1.5 — this will fetch models/code from remote hosts during install/runtime. That behavior increases risk and should be reviewed before running network install steps.
Credentials
Registry metadata declares no required env vars or credentials, which is consistent with a local-only memory skill. However, the code checks COGNITIVE_EMBEDDER (not declared) and the Nomic embedder may fetch remote weights/code. There are no cloud API keys requested, which is good. The combination of on-disk storage under data/ and instructions to copy scripts into ~/.openclaw implies write/read access to home/workspace files — appropriate for a memory runtime but not proportionally trivial, so operators should confirm intended storage locations and permissions.
Persistence & Privilege
The skill is not force-included (always:false) and does not itself request system-wide privileges. However, documentation instructs copying priming scripts into user home and adding lines to the agent base prompt; those are manual steps that, if followed, make the agent read local context automatically at startup. That is expected for a memory skill but increases the blast radius if the skill or its prompt guidance is malicious.
Scan Findings in Context
[system-prompt-override] expected: SKILL.md explicitly asks you to add guidance to the agent base prompt and to inject [ACTIVE CONTEXT] before responses. This is expected for a memory/priming integration, but it is also a form of prompt injection that can alter model behavior — review the exact text before applying to production agents.
What to consider before installing
This package appears to implement a full local memory runtime and mostly does what it says, but several elevated-risk items deserve review before installing and wiring it into a live agent:

  • Prompt injection: SKILL.md recommends adding lines to your agent base prompt and using hooks that inject [ACTIVE CONTEXT] before model responses. That changes the model's system-level guidance; inspect and, if needed, modify those lines so they don't grant unintended privileges or instructions.
  • Local file access: The integration/priming steps read local files (SOUL.md, USER.md, memory/YYYY-MM-DD.md, and other workspace files). Confirm the allowlist/path restrictions and audit the server's file-read code so it cannot access unintended system files. The changelog mentions path-traversal fixes and an allowlist, but validate those protections in the code actually shipped to you.
  • Remote code/model downloads: The Nomic embedder uses sentence-transformers with trust_remote_code=True and a nomic model; installing or instantiating it will download remote model artifacts and may execute code. If you must use this embedder, prefer a controlled environment (air-gapped or vetted model wheels) or use the deterministic hashing embedder fallback.
  • Post-install scripts: Review postinstall.js, install.sh, and smem-hook.sh before running npm/pip install. Run installs inside an isolated virtualenv or container and avoid running as root.
  • Run in isolation first: Start the memory service in a container or VM with mounted data volumes you control, and verify network activity before connecting it to production agents. Monitor outgoing network calls during model downloads and at runtime.
  • Startup wiring: If you plan to auto-wire this into an agent's startup flow, keep the memory server as a sidecar process and require explicit operator approval before the agent adopts injected system prompts.
examples/session-start/nodejs-agent.js:49
Shell command execution detected (child_process).
smart-memory/index.js:158
Shell command execution detected (child_process).
smart-memory/postinstall.js:14
Shell command execution detected (child_process).
smart-memory/index.js:11
Environment variable access combined with network send.
skills/smart-memory-v25/README.md:45
Prompt-injection style instruction pattern detected.
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

Like a lobster shell, security has layers — review code before you run it.

Latest: vk976kqph8nwth4a3312pbneb0183g94t
132 downloads · 0 stars · 2 versions
Updated 1 month ago
v2.5.1 · MIT-0

Smart Memory v2 Skill

Smart Memory v2 is a persistent cognitive memory runtime, not a legacy vector-memory CLI.

Core runtime:

  • Node adapter: smart-memory/index.js
  • Local API: server.py (FastAPI)
  • Orchestrator: cognitive_memory_system.py

Core Capabilities

  • Structured long-term memory (episodic, semantic, belief, goal)
  • Entity-aware retrieval and reranking
  • Hot working memory
  • Background cognition (reflection, consolidation, decay, conflict resolution)
  • Strict token-bounded prompt composition
  • Observability endpoints (/health, /memories, /memory/{id}, /insights/pending)

Native OpenClaw Integration (v2.5)

Use the native OpenClaw skill package:

  • skills/smart-memory-v25/index.js
  • Optional hook helper: skills/smart-memory-v25/openclaw-hooks.js
  • Skill descriptor: skills/smart-memory-v25/SKILL.md

Primary exports:

  • createSmartMemorySkill(options)
  • createOpenClawHooks({ skill, agentIdentity, summarizeWithLLM })

Tool Interface (for agent tool use)

  1. memory_search
  • Purpose: query long-term memory.
  • Input:
    • query (string, required)
    • type (all|semantic|episodic|belief|goal, default all)
    • limit (number, default 5)
    • min_relevance (number, default 0.6)
  • Behavior: checks /health first, then retrieves via /retrieve and returns formatted memory results.
  2. memory_commit
  • Purpose: explicitly persist important facts/decisions/beliefs/goals.
  • Input:
    • content (string, required)
    • type (semantic|episodic|belief|goal, required)
    • importance (1-10, default 5)
    • tags (string array, optional)
  • Behavior:
    • checks /health first
    • auto-tags if missing (working_question, decision heuristics)
    • commits are serialized (sequential) to protect local CPU embedding throughput
    • if server is unreachable, payload is queued to .memory_retry_queue.json
    • unreachable response is explicit:
      • Memory commit failed - server unreachable. Queued for retry.
  3. memory_insights
  • Purpose: surface pending background insights.
  • Input:
    • limit (number, default 10)
  • Behavior: checks /health first, calls /insights/pending, returns formatted insight list.
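The input defaults above can be sketched as a small normalizer. This is a minimal sketch, assuming a hypothetical helper name (`normalizeCommitInput` is illustrative, not an actual skill export):

```javascript
// Sketch: apply the documented memory_commit input defaults.
// normalizeCommitInput is a hypothetical helper, not a skill export.
function normalizeCommitInput(input) {
  if (!input || typeof input.content !== "string") {
    throw new Error("content (string) is required");
  }
  const validTypes = ["semantic", "episodic", "belief", "goal"];
  if (!validTypes.includes(input.type)) {
    throw new Error("type must be one of: " + validTypes.join("|"));
  }
  return {
    content: input.content,
    type: input.type,
    importance: input.importance ?? 5, // documented default
    tags: input.tags ?? [],            // auto-tagging happens server-side
  };
}

// Example: importance falls back to 5, tags to [].
const payload = normalizeCommitInput({
  content: "User prefers CPU-only PyTorch",
  type: "semantic",
});
```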

Reliability Guarantees

  • Mandatory health gate before each tool call (GET /health).
  • Retry queue flushes automatically on healthy tool calls and heartbeat.
  • Heartbeat supports automatic retry recovery and background maintenance.
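The health gate and retry queue can be sketched as a thin wrapper. The transport functions here are injected stand-ins so the sketch runs without a live server, and the queue is in-memory rather than the on-disk .memory_retry_queue.json the skill actually uses:

```javascript
// Sketch of the health-gate + retry-queue pattern (not the skill's code).
// checkHealth and sendCommit are injected so no live server is required.
function createGatedClient({ checkHealth, sendCommit }) {
  const retryQueue = [];
  return {
    async commit(payload) {
      if (!(await checkHealth())) {
        retryQueue.push(payload); // queue for a later flush
        return "Memory commit failed - server unreachable. Queued for retry.";
      }
      // On a healthy call, flush queued payloads first, then send the
      // new one (commits are serialized, per the docs above).
      while (retryQueue.length > 0) {
        await sendCommit(retryQueue.shift());
      }
      await sendCommit(payload);
      return "committed";
    },
    queued: () => retryQueue.length,
  };
}
```

In real use, `checkHealth` would GET /health and `sendCommit` would POST the payload to the local server.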

Session Arc Lifecycle Hooks

The v2.5 skill supports episodic session arc capture:

  • checkpoint capture every 20 turns
  • session-end capture during teardown/reset

Flow:

  1. Extract recent conversation turns (up to 20).
  2. Run summarization with prompt:
    • Summarize this session arc: What was the goal? What approaches were tried? What decisions were made? What remains open?
  3. Persist summary through internal memory_commit as:
    • type: "episodic"
    • tags: ["session_arc", "YYYY-MM-DD"]
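The flow above can be sketched end to end. `buildSessionArcCommit` is a hypothetical helper, not the skill's export; the real skill routes the summary through its internal memory_commit:

```javascript
// Sketch of session-arc capture: take the last 20 turns, summarize with
// the documented prompt, and shape the episodic commit payload.
const ARC_PROMPT =
  "Summarize this session arc: What was the goal? What approaches were " +
  "tried? What decisions were made? What remains open?";

function buildSessionArcCommit(turns, summarize, today = new Date()) {
  const recent = turns.slice(-20); // up to 20 most recent turns
  const conversationText = recent
    .map((t) => `${t.role}: ${t.content}`)
    .join("\n");
  const summary = summarize({ prompt: ARC_PROMPT, conversationText });
  const date = today.toISOString().slice(0, 10); // YYYY-MM-DD
  return { content: summary, type: "episodic", tags: ["session_arc", date] };
}
```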

Passive Context Injection

Use inject_active_context (or createOpenClawHooks().beforeModelResponse) before response generation.

This adds the standardized block:

[ACTIVE CONTEXT]
Status: {status}
Active Projects: {active_projects}
Working Questions: {working_questions}
Top of Mind: {top_of_mind}

Pending Insights:
- {insight_1}
- {insight_2}
[/ACTIVE CONTEXT]
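The block above can be assembled with a small formatter. This is a sketch; `formatActiveContext` and its input shape are illustrative, not the skill's actual API:

```javascript
// Sketch: render the standardized [ACTIVE CONTEXT] block from a context
// object. formatActiveContext is illustrative, not a skill export.
function formatActiveContext(ctx) {
  const insights = (ctx.pendingInsights || [])
    .map((i) => `- ${i}`)
    .join("\n");
  return [
    "[ACTIVE CONTEXT]",
    `Status: ${ctx.status}`,
    `Active Projects: ${ctx.activeProjects.join(", ")}`,
    `Working Questions: ${ctx.workingQuestions.join(", ")}`,
    `Top of Mind: ${ctx.topOfMind}`,
    "",
    "Pending Insights:",
    insights,
    "[/ACTIVE CONTEXT]",
  ].join("\n");
}
```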

Add this guidance line to your agent base prompt:

If pending insights appear in your context that relate to the current conversation, surface them naturally to the user. Do not force it - but if there is a genuine connection, seamlessly bring it up.

Minimal OpenClaw Wiring Example

const {
  createSmartMemorySkill,
  createOpenClawHooks,
} = require("./skills/smart-memory-v25");

const memory = createSmartMemorySkill({
  baseUrl: "http://127.0.0.1:8000",
  summarizeSessionArc: async ({ prompt, conversationText }) => {
    return openclaw.llm.complete({ system: prompt, user: conversationText });
  },
});

const hooks = createOpenClawHooks({
  skill: memory.skill,
  agentIdentity: "OpenClaw Agent",
  summarizeWithLLM: async ({ prompt, conversationText }) => {
    return openclaw.llm.complete({ system: prompt, user: conversationText });
  },
});

// Register memory.tools as callable tools:
// - memory_search
// - memory_commit
// - memory_insights
// and call hooks.beforeModelResponse / hooks.onTurn / hooks.onSessionEnd at lifecycle points.

Node Adapter Methods (Base Adapter)

  • start() / init()
  • ingestMessage(interaction)
  • retrieveContext({ user_message, conversation_history })
  • getPromptContext(promptComposerRequest)
  • runBackground(scheduled)
  • stop()

API Endpoints

  • GET /health
  • POST /ingest
  • POST /retrieve
  • POST /compose
  • POST /run_background
  • GET /memories
  • GET /memory/{memory_id}
  • GET /insights/pending
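A thin client over these endpoints might look like the following sketch. The `transport` function is injected so the sketch runs without a live server; in real use it would wrap fetch() against the local base URL (e.g. http://127.0.0.1:8000), and the method names are illustrative:

```javascript
// Sketch of a minimal client over the documented endpoints. transport is
// an injected (method, path, body) -> response function.
function createMemoryApi(transport) {
  return {
    health: () => transport("GET", "/health"),
    ingest: (interaction) => transport("POST", "/ingest", interaction),
    retrieve: (query) => transport("POST", "/retrieve", query),
    compose: (req) => transport("POST", "/compose", req),
    runBackground: (s) => transport("POST", "/run_background", s),
    listMemories: () => transport("GET", "/memories"),
    getMemory: (id) => transport("GET", `/memory/${id}`),
    pendingInsights: () => transport("GET", "/insights/pending"),
  };
}
```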

Install (CPU-Only Required)

For Docker, WSL, and laptops without NVIDIA GPUs, use CPU-only PyTorch.

# from repository root
cd smart-memory

# Create Python venv
python3 -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate

# Install CPU-only PyTorch FIRST
pip install torch --index-url https://download.pytorch.org/whl/cpu

# Then install remaining dependencies
pip install -r requirements-cognitive.txt

# Finally, install Node dependencies
npm install

PyTorch Policy

  • Smart Memory v2 supports CPU-only PyTorch only.
  • Do not install GPU/CUDA PyTorch builds for this project.
  • Use the bundled installer flow (npm install -> postinstall.js) so CPU wheels are always used.

Deprecated

Legacy vector-memory CLI artifacts (smart_memory.js, vector_memory_local.js, focus_agent.js) are removed in v2.
