Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

cortex-mem-mcp

v2.7.0

Persistent memory enhancement for AI agents. Store conversations, search memories with semantic retrieval, and recall context across sessions. Use this skill...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description match the instructions: this is a local MCP memory server that uses an LLM, embeddings, and Qdrant. That functionality legitimately requires an LLM/embedding API and a vector DB. However, the registry metadata declares no required environment variables, primary credential, or config paths, while the SKILL.md explicitly instructs the operator to create a config file containing LLM and embedding API keys and to edit MCP client config files — a mismatch between what the skill claims to need and what it actually instructs the operator to provide.
Instruction Scope
The SKILL.md directs the operator to install a binary (via cargo or GitHub releases), create a local config.toml containing LLM and embedding API keys, start a local Qdrant instance (Docker), and edit other applications' MCP configuration files (paths for Claude Desktop and Cursor). While these steps are expected when integrating an MCP server, they involve writing secrets to disk and modifying third-party app configuration files. The instructions neither limit nor warn about storing API keys in plaintext, and they do not recommend safer alternatives (e.g., environment variables or system credential stores). The skill also suggests running with debug logging, which increases log verbosity and the risk of sensitive data ending up in logs.
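If you follow the SKILL.md and write keys into a config.toml anyway, a minimal hardening sketch looks like the following; the path under ~/.config/cortex is an assumption, since the report only says a local config.toml is created:

```shell
# Sketch: create the config file with owner-only permissions BEFORE
# adding any secrets to it (path is an assumption, not documented here).
umask 077                                  # new files default to mode 600
mkdir -p ~/.config/cortex
touch ~/.config/cortex/config.toml         # create empty first
chmod 600 ~/.config/cortex/config.toml     # owner read/write only
ls -l ~/.config/cortex/config.toml
```

This does not remove the plaintext-secret risk the scan describes; it only narrows who can read the file.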
Install Mechanism
No install spec is present in the registry (instruction-only), and the SKILL.md recommends standard sources: cargo install (crates.io), building from the GitHub repository, or downloading pre-built releases from the project's GitHub Releases page. Those sources are normal and expected; nothing in the instructions points to obscure third-party download hosts. Because installation is manual and from external sources, users should still verify the upstream repository and signed releases before installing.
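The verification step mentioned above can be sketched as follows. The artifact and checksum file names are assumptions — the report does not confirm that the project actually publishes checksums or signatures:

```shell
# Hypothetical release-verification flow (file names are assumptions):
#
#   curl -LO https://github.com/<owner>/cortex-mem-mcp/releases/download/v2.7.0/cortex-mem-mcp.tar.gz
#   curl -LO https://github.com/<owner>/cortex-mem-mcp/releases/download/v2.7.0/SHA256SUMS
#   sha256sum -c SHA256SUMS --ignore-missing
#
# Self-contained demo of the same check against a local file:
echo "demo artifact" > artifact.tar.gz
sha256sum artifact.tar.gz > SHA256SUMS
sha256sum -c SHA256SUMS    # prints "artifact.tar.gz: OK" on success
```

If the release page offers no checksums at all, that is itself a signal to build from source or run the binary in an isolated environment.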
Credentials
The runtime instructions require LLM and embedding API keys (OpenAI-style base URLs and api_key values) plus a Qdrant endpoint and local data directories, but the registry lists no required env vars or primary credential. This omission is important: the skill will expect secrets/configuration not declared to the platform, increasing the chance a user supplies credentials in unsafe ways (plaintext config files). The SKILL.md does list CORTEX_DATA_DIR and RUST_LOG as environment variables, but these are not sufficient — the LLM/embedding keys are only shown as fields in a TOML example.
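To make the mismatch concrete, the SKILL.md's TOML example presumably resembles the sketch below. Only the OpenAI-style base URLs and api_key fields are confirmed by this report; the section names and the Qdrant field are assumptions — and every value here would sit in plaintext on disk:

```toml
# Illustrative only — section names are assumptions.
[llm]
base_url = "https://api.openai.com/v1"
api_key  = "sk-..."   # plaintext secret the registry never declares

[embedding]
base_url = "https://api.openai.com/v1"
api_key  = "sk-..."

[qdrant]
url = "http://localhost:6334"
```

A registry that declared these as required credentials would let the platform surface and manage them instead of leaving them in a file.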
Persistence & Privilege
The skill expects and documents running a persistent MCP server process (cortex-mem-mcp) and modifying MCP client configuration so the client will call that server. The registry flags (always:false, disable-model-invocation:false) are reasonable; the skill does not request an elevated platform privilege like always:true. Nonetheless, installing this will create a long-running local service that has access to stored memories and any API keys placed in its config, so the persistence and storage of sensitive material is a practical consideration.
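Since the long-running server keeps stored memories (and any keys placed in its config) under its data directory, it is worth checking who can read that directory. A minimal sketch, using the CORTEX_DATA_DIR variable and ~/.cortex-data default that the report attributes to the SKILL.md:

```shell
# Limit who can read the memory store; ~/.cortex-data is the documented
# default, overridable via CORTEX_DATA_DIR (both per the SKILL.md).
data_dir="${CORTEX_DATA_DIR:-$HOME/.cortex-data}"
mkdir -p "$data_dir"
chmod 700 "$data_dir"          # owner-only access
ls -ld "$data_dir"
```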
What to consider before installing
This skill appears to be a legitimate local memory MCP server, but there are important mismatches and risks to consider before installing:

- Registry vs. reality: The package metadata declares no required credentials, yet the SKILL.md instructs you to provide LLM and embedding API keys (OpenAI-style) in a config.toml. Treat that as a red flag — expect to supply secrets even if the registry doesn't list them.
- Secrets storage: The example stores API keys in plain-text config files. Prefer environment variables, OS credential stores, or secure vaulting, and avoid leaving API keys in files under your home directory unless you understand the risk.
- Source validation: The skill points to a GitHub repo and releases. Before running cargo install or downloading binaries, review the upstream repository, verify release checksums/signatures if available, and inspect the code if you can. If the homepage or repo is unknown or untrusted, run in an isolated environment (VM or container).
- Config edits: The instructions tell you to edit other apps' MCP config files (Claude, Cursor). Make a backup of any config file before editing, and ensure your MCP client is trustworthy and intentionally configured to call your local server.
- Data protection: The server will persist user conversations and embeddings locally. Consider data retention, encryption at rest, and who can read the data directory (~/.cortex-data by default).

What would change this assessment: explicit registry fields declaring the required env vars/primary credential (so the platform can surface them), a verified homepage/repository with signed releases, or SKILL.md advising secure handling of API keys (env vars/credential stores) would reduce the concern level. Without those, treat the skill as potentially risky and proceed cautiously.
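The config-backup step above can be sketched as a one-liner. The Claude Desktop path shown is the commonly documented macOS location and is an assumption here; adjust for your client and OS:

```shell
# Back up the MCP client config before adding the cortex-mem-mcp entry.
cfg="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
cp -p "$cfg" "$cfg.bak.$(date +%Y%m%d%H%M%S)"   # timestamped backup
```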

Like a lobster shell, security has layers — review code before you run it.
