Cortex Review
Audited by ClawScan on May 10, 2026.
Overview
The skill is not clearly malicious, but it advertises automatic, persistent, git-tracked memory without describing consent, scope, retention, or deletion controls.
Before installing, confirm that you want an agent memory system that can store conversation-derived information across sessions. Ask for clear controls over what gets saved, where it is stored, how secrets are excluded, how memories can be inspected and deleted, and whether git history preserves removed data.
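As a concrete illustration, the controls described above might surface as a configuration along the following lines. This is a sketch only; none of these fields appear in the reviewed Cortex artifacts, and every name here is an assumption.

```python
# Hypothetical configuration sketch: these fields are NOT documented by the
# reviewed Cortex artifacts. They illustrate the controls a user should look
# for before enabling automatic, git-tracked agent memory.
MEMORY_CONFIG = {
    "require_write_approval": True,       # agent must ask before persisting a memory
    "storage_path": ".agent-memory/",     # one scoped directory, easy to review or .gitignore
    "git_tracked": False,                 # git tracking should be an explicit opt-in
    "exclude_patterns": [                 # never persist text matching these patterns
        r"(?i)api[_-]?key",
        r"(?i)password",
        r"-----BEGIN [A-Z ]*PRIVATE KEY-----",
    ],
    "retention_days": 30,                 # memories expire unless renewed
    "allow_delete": True,                 # user can list and remove stored memories
}
```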
Findings (2)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Private conversation details, preferences, decisions, or accidental secrets could be stored and reused in future agent sessions, and git tracking may make removal harder if history is retained.
The skill explicitly describes automatically extracting conversation content into persistent memory across sessions, but the artifacts do not define user consent, scope, retention, redaction, or deletion controls.
From the skill's own description: "Persistent memory for AI agents... File-based, git-tracked... Observer — Automatic memory extraction from conversations."
Use only if you want persistent agent memory; require explicit write approval, scoped storage paths, sensitive-data exclusions, memory review and deletion controls, and clear retention behavior before relying on it.
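A minimal sketch of what explicit write approval plus sensitive-data exclusion could look like in practice. The function, storage path, and patterns below are illustrative assumptions, not code from the Cortex repository.

```python
import re
from pathlib import Path

# Illustrative redaction patterns; a real deployment would maintain its own list.
SENSITIVE = [re.compile(p) for p in (
    r"(?i)api[_-]?key\s*[:=]\s*\S+",
    r"(?i)password\s*[:=]\s*\S+",
    r"-----BEGIN [A-Z ]*PRIVATE KEY-----",
)]

MEMORY_DIR = Path(".agent-memory")  # hypothetical scoped storage path

def save_memory(text: str, approve=input) -> bool:
    """Persist a memory only if it contains no obvious secret and the user approves."""
    if any(p.search(text) for p in SENSITIVE):
        print("Refusing to store: the text looks like it contains a secret.")
        return False
    if approve(f"Store this memory?\n  {text}\n[y/N] ").strip().lower() != "y":
        print("Skipped: no explicit approval given.")
        return False
    MEMORY_DIR.mkdir(exist_ok=True)
    with (MEMORY_DIR / "memories.md").open("a", encoding="utf-8") as f:
        f.write(text + "\n")
    return True
```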
The reviewed artifact does not show the actual memory implementation, so users cannot verify from this package alone how data is stored or protected.
The supplied package is instruction-only with no install spec or code files, while the skill points to an external repository; the linked implementation was not part of the reviewed artifacts.
GitHub: [sigmalabs-ai/cortex](https://github.com/sigmalabs-ai/cortex)
Verify the linked repository, version, code, and storage behavior separately before installing or connecting it to sensitive conversations or documents.
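One way to do that verification is to pin a specific revision and search the code for the operations that matter here, namely file writes and git commands, since those determine what persists and for how long. The snippet below is a generic sketch: the commit value is a placeholder, and nothing about the repository's actual layout is assumed.

```python
import subprocess

REPO = "https://github.com/sigmalabs-ai/cortex"
COMMIT = "<pinned-sha>"  # placeholder: pin the exact revision you actually reviewed

# Clone and check out a fixed revision so the code you read is the code you run.
subprocess.run(["git", "clone", REPO, "cortex-review"], check=True)
subprocess.run(["git", "-C", "cortex-review", "checkout", COMMIT], check=True)

# Locate where files are written and where git is invoked.
for pattern in ("open(", ".write(", "git add", "git commit", "git push"):
    subprocess.run(["git", "-C", "cortex-review", "grep", "-n", pattern])

# After deleting a memory file, confirm it is also absent from history:
#   git log --all --full-history -- <path-to-memory-file>
```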
