White Stone Memory

Pass. Audited by ClawScan on May 1, 2026.

Overview

The skill appears transparent and purpose-aligned, but it creates persistent agent memory and offers optional vector search/API-key use that users should configure carefully.

This skill is reasonable to install if you want local, file-backed agent memory. Before using it: decide which memory files should be auto-loaded; avoid storing passwords or other sensitive private data; periodically review the global knowledge and error logs; and keep vector search disabled unless you understand whether your embeddings will be processed locally or by Gemini.

Findings (3)

This is an artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Persistent cross-session memory

What this means

Saved memory entries may influence later agent responses and tasks across sessions or sub-agents.

Why it was flagged

The skill intentionally reuses persistent memory across future agent runs and sub-agents, which is core to its purpose but can affect future behavior if the memory files contain stale, sensitive, or malicious content.

Skill content
Error log is global — All Agents must load on startup ... Knowledge loads at startup
Recommendation

Review and curate global memory files regularly, and avoid storing secrets, credentials, or untrusted instructions in auto-loaded memory.
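As a concrete curation step, a small shell sketch like the following can surface credential-looking lines before they are auto-loaded. The memory directory path is an assumption, not something this skill documents; point `MEMORY_DIR` at wherever the skill actually stores its files.

```shell
# Hypothetical location for the skill's memory files; adjust as needed.
MEMORY_DIR="${MEMORY_DIR:-$HOME/.agent/memory}"

# List memory files by modification time so recent changes get reviewed first.
ls -lt "$MEMORY_DIR" 2>/dev/null

# Flag lines that look like secrets or credentials in auto-loaded memory.
grep -rniE '(api[_-]?key|password|secret|token)' "$MEMORY_DIR" 2>/dev/null \
  || echo "no obvious credential-like lines found"
```

The pattern list is deliberately crude; treat a match as a prompt to read the file, not as proof of a leak.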

Finding 2: Optional Gemini API key use

What this means

If enabled, the skill may rely on your Gemini API key and associated account quota or permissions.

Why it was flagged

Optional vector search can use a Gemini API key. This is disclosed and purpose-aligned, but users should understand that enabling it gives the skill use of the configured provider credential for embedding and search functionality.

Skill content
Gemini API | Provide the `GEMINI_API_KEY` environment variable
Recommendation

Use a scoped, revocable API key where possible, keep it in environment variables, and leave vector search disabled if you do not need it.
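One way to follow this is to load the key from a permission-restricted file instead of hard-coding it. A minimal sketch, assuming a key file location of your choosing (the `~/.config/gemini/key` path is illustrative; only the `GEMINI_API_KEY` variable name comes from the skill's docs):

```shell
# Hypothetical key file; only GEMINI_API_KEY itself is documented by the skill.
KEY_FILE="${KEY_FILE:-$HOME/.config/gemini/key}"

if [ -f "$KEY_FILE" ]; then
  # Mode 600 keeps the key readable only by your user.
  chmod 600 "$KEY_FILE"
  export GEMINI_API_KEY="$(cat "$KEY_FILE")"
else
  echo "no key file at $KEY_FILE; leaving vector search disabled" >&2
fi
```

Revoking then reissuing the key in your provider console, rather than reusing one key everywhere, limits the blast radius if a memory file ever captures it.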

Finding 3: External embedding data handling

What this means

If Gemini is chosen for embeddings, memory content used for indexing may be processed outside the local machine; local Ollama is the more privacy-preserving option.

Why it was flagged

The documented semantic indexing can use either a local embedding model or an external provider, but the artifact does not describe data-handling boundaries for memory text processed through the provider option.

Skill content
Embedding | Gemini API or Ollama + qwen3-embedding-0.6B ... `/memory build-index`
Recommendation

Use local Ollama for sensitive memories, or review the external provider’s privacy and retention terms before enabling Gemini-based vector search.
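To confirm indexing can stay local before running the skill's documented `/memory build-index` command, a quick pre-flight check against Ollama's standard localhost endpoint might look like this. The model tag is copied from the skill's docs and may need adjusting to match what Ollama's registry actually serves:

```shell
# Ollama's default local API endpoint.
OLLAMA_URL="${OLLAMA_URL:-http://localhost:11434}"

if curl -fsS "$OLLAMA_URL/api/tags" >/dev/null 2>&1; then
  # Pull the embedding model named in the skill's docs (registry tag may differ).
  ollama pull qwen3-embedding-0.6B
  echo "local embedding backend ready; safe to run /memory build-index"
else
  echo "Ollama not reachable at $OLLAMA_URL; do not enable Gemini for sensitive memories" >&2
fi
```

Failing closed here (no external fallback when the local daemon is down) is the safer default for the privacy concern this finding describes.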