Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

M-flow Memory

v0.3.6

Long-term memory engine for OpenClaw agents using M-flow knowledge graphs. Stores conversations as structured episodic memories and retrieves via graph-route...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install flowelement-alexunbridled/mflow-memory.

Prompt preview: Install & Setup
Install the skill "M-flow Memory" (flowelement-alexunbridled/mflow-memory) from ClawHub.
Skill page: https://clawhub.ai/flowelement-alexunbridled/mflow-memory
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install mflow-memory

ClawHub CLI


npx clawhub@latest install mflow-memory
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's declared purpose (long-term memory via an M-flow MCP) matches the files and runtime instructions: it pulls and runs an m_flow-mcp Docker image, exposes MCP tools, and registers the server with OpenClaw. However, the registry metadata at the top of the package lists no required binaries or environment variables, while SKILL.md and the setup scripts clearly require Docker and an LLM_API_KEY; this inconsistency should be resolved before trusting the package.
Instruction Scope
SKILL.md instructs the agent to always call search before answering and to save interactions at conversation end or on explicit requests to remember. That is expected for a memory skill, but it implies automatic collection and storage of conversation content (potentially sensitive data). The setup and teardown scripts also modify ~/.openclaw/openclaw.json to register/unregister the MCP — this is within scope but should be visible to the user.
Install Mechanism
No formal install spec is present; installation is done by running provided shell scripts which pull a Docker image from Docker Hub. The image is referenced with a sha256 digest (good practice). Running a third-party Docker image is an action with real risk because it executes arbitrary code on the host and will run with whatever privileges Docker grants; users should verify the image source and digest before running.
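One advantage of a digest-pinned reference is that it can be checked mechanically before anything is pulled. The sketch below validates that an image reference is pinned to an immutable sha256 digest rather than a mutable tag; the image name comes from the scan above, and the all-zero digest is a placeholder, not the real one from setup.sh:

```python
import re

# A digest-pinned reference looks like name@sha256:<64 hex chars>;
# a tag reference (name:latest) can silently change between pulls.
DIGEST_RE = re.compile(r"^[\w./-]+@sha256:[0-9a-f]{64}$")

def is_digest_pinned(image_ref: str) -> bool:
    """Return True if the image reference is pinned to an immutable digest."""
    return bool(DIGEST_RE.match(image_ref))

# Placeholder digest for illustration; substitute the real one from the skill's setup.sh.
pinned = "flowelement/m_flow-mcp@sha256:" + "0" * 64
tagged = "flowelement/m_flow-mcp:latest"

assert is_digest_pinned(pinned)
assert not is_digest_pinned(tagged)
```

A pinned digest only guarantees immutability, not trustworthiness; you still need to confirm the digest against Docker Hub or the project's repository.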
Credentials
The skill requires an LLM API key (LLM_API_KEY), which is used by the local MCP and passed into the Docker container. This is a sensitive credential: the container can make API calls that consume your account credits, and it could exfiltrate data. The registry metadata omits this required secret, which increases the concern, since the package does not declare the sensitive requirement where the registry expects it.
Persistence & Privilege
The skill is not force-enabled (always: false). After setup it registers a local MCP server in the user's OpenClaw config so the agent gains long-term memory tools and may call them autonomously per SKILL.md rules. The setup persists data in a Docker volume and edits the user's ~/.openclaw/openclaw.json — both expected for this feature but worth awareness.
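The exact schema of ~/.openclaw/openclaw.json is not shown in the scan, but the register/unregister step the scripts perform amounts to a guarded JSON edit along these lines. This is a sketch under assumptions: the "mcpServers" key name and the entry shape are illustrative, not the verified OpenClaw schema.

```python
import json
import tempfile
from pathlib import Path

def register_mcp_server(config_path: Path, name: str, entry: dict) -> None:
    """Add an MCP server entry without clobbering unrelated config keys."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("mcpServers", {})[name] = entry  # "mcpServers" is an assumed key name
    config_path.write_text(json.dumps(config, indent=2))

def unregister_mcp_server(config_path: Path, name: str) -> None:
    """Teardown counterpart: drop only this skill's entry."""
    config = json.loads(config_path.read_text())
    config.get("mcpServers", {}).pop(name, None)
    config_path.write_text(json.dumps(config, indent=2))

# Demo against a throwaway file rather than the real ~/.openclaw/openclaw.json
demo = Path(tempfile.mkdtemp()) / "openclaw.json"
demo.write_text(json.dumps({"model": "example"}))
register_mcp_server(demo, "mflow-memory", {"command": "docker"})
assert "mflow-memory" in json.loads(demo.read_text())["mcpServers"]
unregister_mcp_server(demo, "mflow-memory")
assert json.loads(demo.read_text())["model"] == "example"
```

Whatever the real schema is, the point stands: the edit should be additive and reversible, which is worth confirming by reading setup.sh and teardown.sh before running them.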
What to consider before installing
This package appears to implement an on-host memory service. Before installing, you should:

  1. Note the inconsistency between the registry metadata and SKILL.md: the scripts require Docker and an LLM_API_KEY that the metadata does not declare.

  2. Inspect and verify the referenced Docker image and digest (flowelement/m_flow-mcp@sha256:...) on Docker Hub or the project's repo; only run images you trust.

  3. Create a dedicated LLM API key with a limited quota and monitor its usage, since the key is injected into the container and could be used to consume credits or exfiltrate data.

  4. Expect that conversations (including sensitive information) may be stored in the Docker volume; decide whether to run in an isolated environment or decline to store certain information.

  5. For higher assurance, ask the publisher for the Docker image's source, or review the upstream repository and container contents in a sandbox before running setup.sh.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97e3pdf8znz7z0ajpyj949fpx84w1zc
205 downloads
1 star
6 versions
Updated 1w ago
v0.3.6
MIT-0

M-flow Memory

Gives your agent persistent long-term memory powered by M-flow's cognitive memory engine.

Setup

Run once:

bash ~/.openclaw/skills/mflow-memory/scripts/setup.sh

This starts the M-flow MCP server and registers it with OpenClaw.

Memory Strategy

Follow these rules for every conversation:

  1. Before answering, silently call search with the user's question to check for relevant memories. If memories are found, use them as context.

  2. At the end of a conversation that contained meaningful information (decisions, facts, preferences, plans — not greetings or small talk), call save_interaction with the conversation content.

  3. When the user explicitly asks you to remember something ("remember that I'm allergic to peanuts", "note that the deadline is Friday"), call save_interaction immediately — do not wait until the end of the conversation.

  4. When the user says "remember", "last time", "before", "previously", or references past events, always call search first.

  5. Do not store trivial exchanges, repeated information, or content the user asks you to forget.
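The five rules above amount to a small decision procedure. The sketch below captures that control flow with the MCP client stubbed out; the trigger detection and the test for "meaningful" content are deliberately crude placeholders, not the skill's actual logic:

```python
RECALL_TRIGGERS = ("remember", "last time", "before", "previously")

class StubMCP:
    """Stand-in for the real MCP client; records what would be stored."""
    def __init__(self):
        self.saved = []
    def search(self, query):
        return []  # no stored memories in this sketch
    def save_interaction(self, content):
        self.saved.append(content)

def is_meaningful(text: str) -> bool:
    """Crude stand-in for 'decisions, facts, preferences, plans'."""
    return len(text.split()) > 3 and text.lower() not in ("hi", "hello", "thanks")

def handle_turn(user_message: str, conversation_ended: bool, mcp) -> list:
    # Rules 1 & 4: always check memory first, especially on recall cues.
    memories = mcp.search(user_message)
    if user_message.lower().startswith("remember"):
        # Rule 3: explicit request -> save immediately.
        mcp.save_interaction(user_message)
    elif conversation_ended and is_meaningful(user_message):
        # Rule 2: save meaningful content at conversation end.
        mcp.save_interaction(user_message)
    # Rule 5: trivial exchanges fall through and are never stored.
    return memories

mcp = StubMCP()
handle_turn("remember that the deadline is Friday", False, mcp)
handle_turn("hi", True, mcp)
assert mcp.saved == ["remember that the deadline is Friday"]
```

In the real skill these decisions are made by the agent following SKILL.md, not by deterministic code, which is exactly why the scan flags automatic collection of conversation content as worth reviewing.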

Available Tools (via MCP)

After setup, these tools are automatically available:

  • save_interaction — Store a conversation as memory (preferred for dialogue)
  • search — Search memories by natural language query
  • query — Ask a question and get an answer grounded in memories
  • memorize — Build knowledge graph from previously added data
  • ingest — One-step store + memorize (for documents)
  • list_data — List stored datasets
  • delete — Remove specific memories
  • memorize_status — Check if memorization is still processing
  • prune — Reset all memory
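The agent invokes these tools over MCP's JSON-RPC transport. As a sketch of what a `save_interaction` call looks like on the wire (the `tools/call` method and `params` shape follow the MCP spec; the argument values are illustrative, not captured traffic):

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = mcp_tool_call(1, "save_interaction", {"content": "User prefers metric units."})
parsed = json.loads(msg)
assert parsed["method"] == "tools/call"
assert parsed["params"]["name"] == "save_interaction"
```

You normally never write these messages by hand; OpenClaw's MCP client does it for you once the server is registered.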

Troubleshooting

# Check if M-flow is running
bash ~/.openclaw/skills/mflow-memory/scripts/status.sh

# Restart
docker restart mflow-memory

# View logs
docker logs mflow-memory --tail 20

# Complete removal
bash ~/.openclaw/skills/mflow-memory/scripts/teardown.sh
