Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Mflow Memory
v0.3.4
Long-term memory engine for OpenClaw agents using M-flow knowledge graphs. Stores conversations as structured episodic memories and retrieves via graph-route...
⭐ 1 · 67 · 0 current · 0 all-time
by FANGZONG@flowelement-alexunbridled
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence

Purpose & Capability
The skill's files and SKILL.md align with its stated purpose (a local M-flow memory MCP service run via Docker). However, the registry metadata claims no required environment variables, while the README and setup.sh clearly require an LLM API key (LLM_API_KEY / OpenAI API key). That metadata omission is an inconsistency.
Instruction Scope
Runtime instructions tell the agent to search before replying and to save meaningful interactions — this fits a long-term memory skill. The instructions do not ask the agent to read unrelated system files or exfiltrate data to unexpected endpoints; they expect communication with the local MCP server.
Install Mechanism
There is no formal registry install spec, but the included setup.sh pulls and runs a Docker Hub image (flowelement/m_flow-mcp, with a pinned digest) and creates a Docker volume. Pulling and running a third-party image is expected for this functionality but carries the usual risks: verify the image publisher and digest before use.
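The scan notes the image is digest-pinned; you can confirm that yourself before running setup.sh by inspecting the image reference in the script. A minimal sketch (the grep pattern assumes the reference appears literally in setup.sh):

```shell
# Check whether the image reference in setup.sh is pinned by digest.
# The grep pattern is an assumption about how the script spells the reference.
IMAGE_REF=$(grep -o 'flowelement/m_flow-mcp[^ "]*' setup.sh | head -n 1)
case "$IMAGE_REF" in
  *@sha256:*) echo "image is digest-pinned: $IMAGE_REF" ;;
  *)          echo "WARNING: image is not digest-pinned" ;;
esac
```

A digest pin means a re-tagged `latest` on Docker Hub cannot silently change what you run; the pull fails unless the content matches the recorded sha256.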
Credentials
The skill requires an LLM API key for knowledge extraction (setup.sh prompts for LLM_API_KEY and README references OpenAI API key), but the registry metadata lists no required env vars or primary credential. This mismatch is a meaningful omission: the skill will need a secret (API key) and will place it into the container environment.
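To see what "placing the key into the container environment" means in practice, here is a hedged sketch of the kind of command setup.sh issues. The container name, volume name, and mount path are assumptions, not taken from the script; the point is that the secret lands in the container environment, where anyone with `docker inspect` access can read it.

```shell
# Sketch only: names, flags, and paths are assumed, not read from setup.sh.
IMAGE="flowelement/m_flow-mcp:latest"   # the real script pins a sha256 digest
VOLUME="m_flow_data"                    # persistent memory store (assumed name)
RUN_CMD="docker run -d --name m_flow-mcp \
  -e LLM_API_KEY=\$LLM_API_KEY \
  -v $VOLUME:/data \
  $IMAGE"
echo "$RUN_CMD"   # review before executing; the -e flag exposes the key to docker inspect
```

This is one reason the mitigations below suggest a scoped, usage-limited key rather than a broadly privileged one.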
Persistence & Privilege
The skill modifies the user's OpenClaw config to register an MCP server and creates a Docker volume for persistent data. It does not request 'always: true' or attempt to modify other skills' credentials. Those changes are within reasonable scope for an MCP-backed memory service, but they are persistent and should be reviewed by the user.
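For reference, MCP server registration in a config file like ~/.openclaw/openclaw.json typically looks something like the fragment below. The key names, server name, and URL here are assumptions, not taken from this skill's files; diff your config before and after running setup.sh rather than trusting this sketch.

```json
{
  "mcpServers": {
    "m-flow-memory": {
      "url": "http://localhost:8080/mcp"
    }
  }
}
```

Knowing roughly what the entry looks like makes it easy to spot, and to remove by hand if you uninstall the skill.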
What to consider before installing:
- Metadata mismatch: The registry claims no required env vars, but setup.sh/README require an LLM API key (LLM_API_KEY / OpenAI key). Expect to provide a secret; verify you are comfortable supplying it.
- Docker image risk: setup.sh pulls and runs flowelement/m_flow-mcp:latest@sha256:... from Docker Hub. Verify the image name and digest, inspect the upstream repository (FlowElement) and release notes, or run the image in an isolated environment first.
- Privacy implications: The agent is instructed to "silently" search and to save conversation content. Conversations (or parts of them) will be stored persistently in a Docker volume and processed by the MCP service which uses the provided LLM key to extract knowledge. Avoid providing highly sensitive secrets or PII unless you trust the image and have reviewed its behavior.
- Configuration changes: setup.sh will create/modify ~/.openclaw/openclaw.json to register the MCP server. Back up that file before installing if you need to preserve custom config.
- Mitigations: run the container on an isolated host or VM, review the container runtime (docker inspect, docker history) and network behavior, consider using a scoped API key with usage limits for the LLM provider, and confirm you can remove the volume with the provided teardown script if needed.
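The configuration-change point above is easy to script. A minimal sketch that backs up the config file before installing (the timestamped suffix is just one convention, not something setup.sh does for you):

```shell
# Back up the OpenClaw config that setup.sh will modify.
CONFIG="$HOME/.openclaw/openclaw.json"
if [ -f "$CONFIG" ]; then
  cp "$CONFIG" "$CONFIG.bak.$(date +%Y%m%d%H%M%S)"
  echo "backed up $CONFIG"
else
  echo "no existing config to back up"
fi
```

After installing, `diff` the backup against the live file to see exactly what the installer changed.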
If you want this skill but are uncomfortable: ask the publisher to update the metadata to declare LLM_API_KEY as a required credential, provide a link to the official image source and a reproducible build, or provide a signed image manifest you can verify before running.

Like a lobster shell, security has layers: review code before you run it.
latest: vk97bg5qvgaxgap4m0gcpkkxjhh84p3b6
