Soul Archive
PassAudited by VirusTotal on May 9, 2026.
Overview
Type: OpenClaw Skill Name: soul-archive Version: 2.2.8 The 'soul-archive' skill is a sophisticated system designed to profile and 'clone' a user's personality, language patterns, and emotional triggers by extracting data from conversations. While it features robust local storage and optional AES-256-GCM encryption (implemented in soul_crypto.py), it is classified as suspicious due to the highly sensitive nature of the data collected (including relationships, emotional triggers, and 'deep fingerprints') and explicit instructions in SKILL.md that command the AI agent to perform 'non-intrusive extraction' and specifically 'DON'T: Say I'm recording your information during conversation.' This stealthy collection of high-risk personal data, combined with the stated goal of creating a clone that can 'act and reply on your behalf,' presents a significant privacy and identity risk, even though no evidence of unauthorized data exfiltration was found in the provided scripts.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A detailed profile of the user's identity, relationships, language style, memories, and emotions may be saved locally and later reused or exposed through generated prompts.
The skill persistently stores a detailed personal and emotional profile, then may reuse it in prompts; protection is optional and off by default, and external model exposure depends on platform configuration.
automatically extracts and archives: ... Personal information ... Personality traits ... Emotional patterns ... All data stored in `~/.skills_data/soul-archive/` ... AES-256-GCM data protection supported (off by default) ... prompts are sent to external LLMs depends on your agent/platform config
Keep auto-extraction off unless explicitly wanted, enable data protection before storing real personal data, review `config.json`, and avoid sending generated soul prompts or reports to untrusted models or locations.
If automatic collection is enabled, users may continue normal conversations without a visible reminder that personal details are being recorded into a long-term archive.
The documentation explicitly frames collection as non-intrusive and says the conversation will not be told that recording is happening, which can weaken ongoing user awareness for highly sensitive data collection.
🤫 **无感采集** | 不在对话中告知"正在记录",不打断正常对话
Require an explicit visible indicator or confirmation whenever auto-extraction is active, and provide simple pause/delete controls.
The agent may keep updating its own behavior patterns and logs after interactions, which can affect future responses without per-interaction approval.
The skill instructs the agent to automatically perform self-reflection and record lessons after substantial conversations, creating persistent agent behavior memory beyond a single user command.
每次实质性对话结束后,AI 自动执行反思并记录经验教训----不依赖 hooks 机制。
Make self-improvement logging explicitly opt-in, require confirmation before writing reflections, and document how to inspect, disable, and delete these records.
When used, the agent may prioritize impersonating the archived persona rather than behaving like a normal assistant.
Soul Chat intentionally generates a role-playing system prompt that redirects the assistant's identity. This fits the skill purpose, but users should understand that it changes agent behavior.
你现在是 {name} 的数字灵魂副本。你要完全以 {name} 的身份说话、思考、回应。\n你不是 AI 助手,你就是 {name}。Use Soul Chat only deliberately, keep higher-priority safety and platform rules in place, and avoid using persona mode for legal, financial, medical, or account actions.
