White Stone Memory

Pass. Audited by VirusTotal on May 11, 2026.

Overview

Type: OpenClaw Skill
Name: white-stone-mem
Version: 1.1.0

The skill bundle defines a memory system for an AI agent, providing instructions for file management (creation, reading, appending) using standard shell commands (e.g., `mkdir`, `cat`, `echo`, `date`). It also outlines an optional vector search feature that may read `GEMINI_API_KEY` from environment variables or interact with a local Ollama instance via `curl localhost`. All instructions in `SKILL.md` relate directly to the stated purpose of managing memory files and include security-conscious rules such as "Do NOT load project memory proactively". There is no evidence of intentional harmful behavior, data exfiltration, unauthorized remote control, or malicious prompt-injection payloads. The capabilities demonstrated are standard for an agent interacting with a file system, and any potential vulnerabilities would stem from the agent's underlying execution environment rather than from malicious intent within the skill itself.
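The file-management flow described above can be sketched with ordinary shell commands. The directory path and file name here are hypothetical; the skill's actual memory layout may differ:

```shell
# Hypothetical memory layout; the skill's actual paths and file names may differ.
MEM_DIR="${TMPDIR:-/tmp}/white-stone-mem-demo"
mkdir -p "$MEM_DIR"

# Append a timestamped entry rather than rewriting the whole file.
echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) note: finished refactor" >> "$MEM_DIR/project.md"

# Read memory back only when explicitly requested, per the skill's
# "Do NOT load project memory proactively" rule.
cat "$MEM_DIR/project.md"
```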

Findings (0)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Saved memory entries may influence later agent responses and tasks across sessions or sub-agents.

Why it was flagged

The skill intentionally reuses persistent memory across future agent runs and sub-agents, which is core to its purpose but can affect future behavior if the memory files contain stale, sensitive, or malicious content.

Skill content
Error log is global — All Agents must load on startup ... Knowledge loads at startup
Recommendation

Review and curate global memory files regularly, and avoid storing secrets, credentials, or untrusted instructions in auto-loaded memory.
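A periodic curation pass can start with grepping auto-loaded memory files for credential-shaped lines. The directory, sample file, and pattern below are illustrative, not part of the skill:

```shell
# Illustrative audit of a memory directory; paths and sample data are made up.
MEM_DIR="${TMPDIR:-/tmp}/wsm-audit-demo"
mkdir -p "$MEM_DIR"
printf 'note: deploy at 5pm\nAPI_KEY=sk-example\n' > "$MEM_DIR/global.md"

# Flag lines that look like credentials before they get auto-loaded.
grep -rniE '(api[_-]?key|secret|password|token)[[:space:]]*[:=]' "$MEM_DIR"
```

A match (here, the `API_KEY=sk-example` line) is a cue to move the secret out of memory files and into an environment variable or secret store.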

What this means

If enabled, the skill may rely on your Gemini API key and associated account quota or permissions.

Why it was flagged

Optional vector search can use a Gemini API key. This is disclosed and purpose-aligned, but users should understand that enabling it grants the configured provider credential for embedding/search functionality.

Skill content
Gemini API | provide the `GEMINI_API_KEY` environment variable
Recommendation

Use a scoped, revocable API key where possible, keep it in environment variables, and leave vector search disabled if you do not need it.
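One way to keep the credential out of scope when vector search is off is a guard like the following. The `ENABLE_VECTOR_SEARCH` toggle is a hypothetical convention, not something the skill defines:

```shell
# Hypothetical toggle; the skill's actual configuration mechanism may differ.
ENABLE_VECTOR_SEARCH="${ENABLE_VECTOR_SEARCH:-0}"

if [ "$ENABLE_VECTOR_SEARCH" = "1" ]; then
  # Fail fast if the key is missing, rather than silently degrading.
  : "${GEMINI_API_KEY:?set GEMINI_API_KEY before enabling vector search}"
else
  # Ensure the key is not visible to the agent process when unused.
  unset GEMINI_API_KEY
  echo "vector search disabled; GEMINI_API_KEY not exported"
fi
```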

What this means

If Gemini is chosen for embeddings, memory content used for indexing may be processed outside the local machine; local Ollama is the more privacy-preserving option.

Why it was flagged

The documented semantic indexing can use either a local embedding model or an external provider, but the artifact does not describe data-handling boundaries for memory text processed through the provider option.

Skill content
Embedding | Gemini API or Ollama + qwen3-embedding-0.6B ... `/memory build-index`
Recommendation

Use local Ollama for sensitive memories, or review the external provider’s privacy and retention terms before enabling Gemini-based vector search.
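For reference, a local embedding call against Ollama might look like the following. The `/api/embeddings` route is Ollama's standard REST endpoint, but the exact model tag is an assumption, and the guard simply skips the call when no local server is listening:

```shell
# Sketch of a local, privacy-preserving embedding call; the model tag
# "qwen3-embedding:0.6b" is an assumption based on the documented model name.
if curl -s --max-time 2 localhost:11434/api/tags >/dev/null 2>&1; then
  curl -s localhost:11434/api/embeddings \
    -d '{"model": "qwen3-embedding:0.6b", "prompt": "meeting notes from Tuesday"}'
else
  echo "Ollama not running on localhost:11434; skipping local embedding call"
fi
```

Because the request never leaves `localhost`, memory text used for indexing stays on the machine.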