Andrew Memory Layer
Analysis
The skill matches its memory-layer purpose, but it defaults to sending memory text to MiniMax and persists distilled conversation memories across sessions without clear review or deletion controls.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks whether tool use, credentials, dependencies, identity, account access, or inter-agent boundaries are broader than the stated purpose.
```js
this.apiKey = process.env.MINIMAX_API_KEY || '';
...
'Authorization': `Bearer ${this.apiKey}`
```

The code uses an environment-provided MiniMax API key to authenticate provider calls. This is expected for cloud mode, but the key is not reflected in the registry's declared required env vars or primary credential.
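One way the mismatch could be made visible is to fail fast when API mode is selected but no key is present, instead of silently defaulting to an empty key. A minimal sketch, assuming the constructor shape shown in the finding; the `MemoryLayer` class name and error message are illustrative, not the skill's actual API:

```js
class MemoryLayer {
  constructor(options = {}) {
    this.llmMode = options.llmMode || 'api';
    this.apiKey = process.env.MINIMAX_API_KEY || '';
    // With an empty key, cloud ('api') mode would only fail later, at the
    // first provider call. Surfacing the misconfiguration here matches what
    // a registry-declared required env var implies.
    if (this.llmMode === 'api' && !this.apiKey) {
      throw new Error('MINIMAX_API_KEY is required when llmMode is "api"');
    }
  }
}
```

Local mode continues to construct without a key, so the check only constrains the cloud path.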
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
```js
this.llmMode = options.llmMode || 'api';
...
fetch('https://api.minimaxi.com/v1/embeddings', ...
  'Authorization': `Bearer ${this.apiKey}` ...
  body: JSON.stringify({ model: 'embo-01', texts: [text], type: 'query' }))
```

API mode is the default and sends the supplied memory or search text to MiniMax for embeddings, crossing a provider boundary with potentially sensitive long-term memory content.
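The boundary crossing could be gated behind an explicit opt-in rather than being the default. A hedged sketch, assuming Node 18+ global `fetch`; the `embed` function and `allowRemoteEmbeddings` flag are illustrative names, while the endpoint and request body are taken from the finding:

```js
// Hypothetical wrapper: memory text may only cross the provider boundary
// after an explicit opt-in, so sending long-term memory content to MiniMax
// is a deliberate choice rather than the default.
async function embed(text, opts = {}) {
  if (!opts.allowRemoteEmbeddings) {
    throw new Error('remote embeddings disabled; pass allowRemoteEmbeddings: true to opt in');
  }
  const res = await fetch('https://api.minimaxi.com/v1/embeddings', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.MINIMAX_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ model: 'embo-01', texts: [text], type: 'query' }),
  });
  return res.json();
}
```

With this shape, a caller that never sets the flag keeps all memory text local, and the opt-in is visible at every call site.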
```js
const conversation = messages.map(m => `${m.role}: ${m.content}`).join('\n');
...
const response = await this._callLLM(prompt);
...
await this.add({ text: cleaned, memoryType: 'distilled', ... sourceFile: 'distill' });
```

Conversation content is summarized by an LLM and then stored as persistent memory without an artifact-shown approval or review step, so misleading or injected conversation text can become future context.
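A review step could sit between distillation and persistence: distilled summaries go into a pending queue and only reach the store after approval. A minimal sketch under stated assumptions; `DistillQueue`, `propose`, `approve`, and `discard` are hypothetical names, and the entry shape mirrors the `add()` call in the finding:

```js
// Hypothetical review gate: distilled summaries are queued instead of
// persisted directly, so a human (or policy check) can approve or discard
// each one before it becomes future context.
class DistillQueue {
  constructor(store) {
    this.store = store;       // any object with an async add() method
    this.pendingReview = [];
  }
  propose(cleaned) {
    this.pendingReview.push({ text: cleaned, memoryType: 'distilled', sourceFile: 'distill' });
    return this.pendingReview.length - 1;   // index used to approve/discard
  }
  async approve(index) {
    const entry = this.pendingReview[index];
    if (!entry) throw new Error('no pending entry at index ' + index);
    await this.store.add(entry);            // only approved text persists
    this.pendingReview[index] = null;       // mark consumed
  }
  discard(index) {
    this.pendingReview[index] = null;       // injected or misleading text never persists
  }
}
```

Discarded entries never touch the store, which is the property the finding says the artifact does not demonstrate.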
