Soul Archive

Verdict: Warn. Audited by ClawScan on May 10, 2026.

Overview

Soul Archive is coherent and not clearly malicious, but it persistently builds a highly sensitive personality profile and agent memory, and it includes automatic collection modes that warrant careful user review.

Install only if you intentionally want a long-term local personality archive. Before using real conversations, review `config.json`, keep `auto_extract` disabled unless you understand the impact, enable encryption with a strong `SOUL_PASSWORD`, and know where the data and generated HTML reports are stored so you can inspect or delete them.
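Before trusting the defaults, the settings named above can be checked programmatically. This is a minimal sketch: the `auto_extract` key comes from the report, but the config file location and the `encryption` key name are assumptions that should be verified against the skill's actual `config.json` schema.

```python
import json
from pathlib import Path

# Assumed location inside the data directory named in the report;
# adjust if the skill stores its config elsewhere.
CONFIG_PATH = Path("~/.skills_data/soul-archive/config.json").expanduser()

def audit_config(cfg: dict) -> list[str]:
    """Return a list of warnings for risky settings in a parsed config."""
    warnings = []
    if cfg.get("auto_extract", False):  # key name taken from the report
        warnings.append("auto_extract is enabled: conversations are profiled automatically")
    if not cfg.get("encryption", False):  # hypothetical key name
        warnings.append("encryption is disabled: profile data is stored in plaintext")
    return warnings

if __name__ == "__main__":
    if CONFIG_PATH.exists():
        cfg = json.loads(CONFIG_PATH.read_text())
    else:
        # Demonstrate on a sample resembling the risky defaults described above
        cfg = {"auto_extract": True, "encryption": False}
    for w in audit_config(cfg):
        print("WARNING:", w)
```

Running this before the first real conversation makes the two highest-impact settings visible at a glance.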

Findings (4)

Findings are based on an artifact-level, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Persistent storage of a sensitive personal profile

What this means

A detailed profile of the user's identity, relationships, language style, memories, and emotions may be saved locally and later reused or exposed through generated prompts.

Why it was flagged

The skill persistently stores a detailed personal and emotional profile, then may reuse it in prompts; protection is optional and off by default, and external model exposure depends on platform configuration.

Skill content
automatically extracts and archives: ... Personal information ... Personality traits ... Emotional patterns ... All data stored in `~/.skills_data/soul-archive/` ... AES-256-GCM data protection supported (off by default) ... whether prompts are sent to external LLMs depends on your agent/platform config

Recommendation

Keep auto-extraction off unless explicitly wanted, enable data protection before storing real personal data, review `config.json`, and avoid sending generated soul prompts or reports to untrusted models or locations.
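Since protection hinges on `SOUL_PASSWORD` being set before real data is stored, a pre-flight check is cheap insurance. This sketch only verifies presence and length; the 16-character minimum is an illustrative policy choice, not a requirement of the skill.

```python
import os

def password_ready(env=os.environ, min_len: int = 16) -> bool:
    """Return True if SOUL_PASSWORD is set and meets a minimum length.

    min_len=16 is an assumed policy for this sketch, not a skill requirement.
    """
    return len(env.get("SOUL_PASSWORD", "")) >= min_len

if __name__ == "__main__":
    if not password_ready():
        print("Refusing to archive: set a strong SOUL_PASSWORD and enable encryption first.")
```

A wrapper like this can gate any script that feeds real conversations into the archive.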

Finding 2: Silent collection without user notice

What this means

If automatic collection is enabled, users may continue normal conversations without a visible reminder that personal details are being recorded into a long-term archive.

Why it was flagged

The documentation explicitly frames collection as non-intrusive and says the conversation will not be told that recording is happening, which can weaken ongoing user awareness for highly sensitive data collection.

Skill content
🤫 **Silent collection** | Does not announce "recording in progress" in the conversation and does not interrupt the normal dialogue

Recommendation

Require an explicit visible indicator or confirmation whenever auto-extraction is active, and provide simple pause/delete controls.

Finding 3: Automatic self-reflection memory

What this means

The agent may keep updating its own behavior patterns and logs after interactions, which can affect future responses without per-interaction approval.

Why it was flagged

The skill instructs the agent to automatically perform self-reflection and record lessons after substantial conversations, creating persistent agent behavior memory beyond a single user command.

Skill content
After each substantive conversation ends, the AI automatically performs self-reflection and records lessons learned, without relying on the hooks mechanism.

Recommendation

Make self-improvement logging explicitly opt-in, require confirmation before writing reflections, and document how to inspect, disable, and delete these records.
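The data directory named earlier in this report (`~/.skills_data/soul-archive/`) can be inspected and wiped with a short script. The directory layout and file names are not documented here, so this sketch simply enumerates whatever exists and offers a full delete.

```python
import shutil
from pathlib import Path

# Path taken from the skill content quoted in Finding 1.
ARCHIVE = Path("~/.skills_data/soul-archive").expanduser()

def list_records(root: Path = ARCHIVE) -> list[Path]:
    """Print every stored file with its size so the archive can be reviewed."""
    if not root.exists():
        print(f"no archive found at {root}")
        return []
    files = sorted(p for p in root.rglob("*") if p.is_file())
    for p in files:
        print(f"{p.relative_to(root)}  {p.stat().st_size} bytes")
    return files

def delete_archive(root: Path = ARCHIVE) -> None:
    """Remove the whole archive directory after manual review."""
    if root.exists():
        shutil.rmtree(root)
```

Reviewing `list_records()` output periodically is the simplest way to confirm what the self-reflection and profiling features have actually written to disk.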

Finding 4: Persona-override system prompt

What this means

When used, the agent may prioritize impersonating the archived persona rather than behaving like a normal assistant.

Why it was flagged

Soul Chat intentionally generates a role-playing system prompt that redirects the assistant's identity. This fits the skill purpose, but users should understand that it changes agent behavior.

Skill content
You are now a digital soul copy of {name}. You must speak, think, and respond entirely as {name}.\nYou are not an AI assistant; you are {name}.

Recommendation

Use Soul Chat only deliberately, keep higher-priority safety and platform rules in place, and avoid using persona mode for legal, financial, medical, or account actions.