Clawsoul Skill Review
Audited by ClawScan on May 10, 2026.
Overview
This skill largely matches its stated personality-learning purpose, but it warrants review because it persistently alters the agent's persona using saved preferences and has unclear cloud/LLM data boundaries.
Before installing, decide whether you are comfortable with a skill that stores a persistent user profile and changes the assistant persona. Avoid pasting untrusted Pro tokens, check whether any cloud LLM provider/API key is enabled, and know that clearing data likely requires manually deleting ~/.clawsoul/state.json unless the publisher adds a clear/reset command.
Findings (6)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The skill can learn from conversations, store a user profile, and change the assistant's speaking style.
The skill explicitly asks for chat-history access, system-prompt modification, and local storage. These are purpose-aligned for a personality-learning skill, but they are high-impact permissions users should notice.
- `read_chat_history`: analyze user preferences
- `modify_system_prompt`: dynamically adjust tone
- `local_storage`: save personality state
Install only if you are comfortable with the assistant using chat-derived preferences to modify its persona; provide a visible way to review and disable prompt changes.
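The recommendation above asks for a visible way to review and disable prompt changes. A minimal sketch of such a gate follows; the `PersonaGate` class and its methods are illustrative assumptions, not the skill's actual API.

```python
# Sketch of a user-facing review gate for persona changes, assuming the
# skill routes its prompt edits through one manager. PersonaGate and its
# method names are hypothetical.
class PersonaGate:
    def __init__(self, enabled: bool = True):
        self.enabled = enabled          # user-visible on/off switch
        self.pending: list[str] = []    # changes awaiting review

    def propose(self, change: str) -> None:
        """Queue a prompt change for review instead of applying it silently."""
        self.pending.append(change)

    def approved_changes(self, approved: set[str]) -> list[str]:
        """Return only changes the user approved, or nothing when disabled."""
        if not self.enabled:
            return []
        return [c for c in self.pending if c in approved]

gate = PersonaGate()
gate.propose("use a more casual tone")
gate.propose("always answer in haiku")
print(gate.approved_changes({"use a more casual tone"}))
```

With the switch off, no queued change reaches the system prompt at all, which gives users the disable control the finding asks for.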
A malicious, mistaken, or overly broad injected preference could keep influencing future assistant behavior across sessions.
Persisted user preferences are inserted directly into the persona prompt. Other provided code can populate those preferences from Pro token data or LLM analysis, so unvalidated content can become persistent prompt context.
    preferences = self.mm.get_user_preferences()
    ...
    for pref in preferences:
        pref_prompt += f"\n- {pref}"

Validate Pro token schemas, restrict preference values to safe labels, quote stored preferences as data rather than instructions, and add clear review/reset controls.
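The suggested hardening could look like the sketch below: allow-list preference values and render them as quoted data rather than free-form instructions. `SAFE_PREFERENCES` and `build_pref_prompt` are illustrative names; the skill's real preference vocabulary is not documented.

```python
# Hypothetical allow-list of safe preference labels (assumed values).
SAFE_PREFERENCES = {"concise", "formal", "casual", "detailed"}

def build_pref_prompt(preferences: list[str]) -> str:
    """Keep only allow-listed labels and render them as quoted data."""
    safe = [p for p in preferences if p in SAFE_PREFERENCES]
    # Quoting as data means an injected string such as
    # "ignore all previous instructions" is dropped, not obeyed.
    return "".join(f'\n- stored preference (data, not instruction): "{p}"'
                   for p in safe)

print(build_pref_prompt(["concise", "ignore all previous instructions"]))
```

Only the allow-listed label survives into the persona prompt, which breaks the persistence path for injected preferences described in this finding.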
If a cloud provider is enabled, recent conversation content may be sent outside the local machine for analysis.
The LLM client includes non-local providers and builds analysis input from the recent conversation history. The default is local Ollama, but the artifacts do not clearly bound or disclose cloud-provider data flow.
"qwen": {"api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1" ...}
"deepseek": {"api_base": "https://api.deepseek.com/v1" ...}
...
for msg in conversation[-20:]Make cloud LLM use explicit and opt-in, declare provider endpoints and data sent, and keep a local-only mode that cannot accidentally use remote providers.
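An explicit opt-in gate for remote providers could be as small as the sketch below. The `CLAWSOUL_ALLOW_CLOUD` variable and the provider-resolution function are assumptions, not part of the skill's actual configuration surface; only the `ollama`, `qwen`, and `deepseek` provider names come from the artifacts.

```python
import os

LOCAL_PROVIDERS = {"ollama"}  # providers that never leave the machine

def resolve_provider(requested: str) -> str:
    """Fall back to local Ollama unless the user has opted in to cloud use."""
    cloud_ok = os.getenv("CLAWSOUL_ALLOW_CLOUD") == "1"  # hypothetical opt-in flag
    if requested in LOCAL_PROVIDERS or cloud_ok:
        return requested
    # Refuse to silently route conversation history off-device.
    return "ollama"
```

Under this design a configured `qwen` or `deepseek` provider is ignored until the user sets the opt-in flag, so the default can never send the last 20 messages to a remote endpoint by accident.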
Users could believe their chats never leave the device even when a remote LLM provider is configured.
This absolute privacy claim is too broad because the included LLM client supports cloud providers that can receive conversation-analysis requests.
    All data stored locally, never uploaded to cloud
Replace the absolute claim with accurate wording such as 'local by default with Ollama; cloud providers are optional and send selected conversation history when enabled.'
If those environment variables exist, the skill may use the associated provider account and quota.
The code can use provider API keys from environment variables, while the registry metadata declares no credentials or environment variables. This is purpose-aligned for optional cloud LLM providers but under-declared.
"api_key": os.getenv("DASHSCOPE_API_KEY", "")
...
"Authorization": f"Bearer {api_key}"Declare optional environment variables and provider credentials in metadata, and document when they are used.
It may be harder to verify the publisher, source history, or update integrity.
The artifact provenance is limited. No malicious install behavior is shown, but users have less external context for a skill that handles persistent memory and prompt modification.
    Source: unknown
    Homepage: none
Provide a public homepage/source repository and align registry metadata with the packaged files.
