Clawsoul Skill

Audited by ClawScan on May 10, 2026.

Overview

This skill largely matches its stated personality-learning purpose, but it warrants review because it persistently alters the agent's behavior using saved preferences and has unclear cloud/LLM data boundaries.

Before installing, decide whether you are comfortable with a skill that stores a persistent user profile and changes the assistant persona. Avoid pasting untrusted Pro tokens, check whether any cloud LLM provider/API key is enabled, and know that clearing data likely requires manually deleting ~/.clawsoul/state.json unless the publisher adds a clear/reset command.
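Until the publisher ships a reset command, clearing the stored profile is a manual step. A minimal sketch of what that reset could look like; the state-file location comes from this review, while the `reset_profile` helper is hypothetical and not part of the skill:

```python
from pathlib import Path

# Location reported in this review; the skill may change it in future versions.
DEFAULT_STATE = Path.home() / ".clawsoul" / "state.json"

def reset_profile(state_path=None):
    """Delete the stored personality profile, if present.

    Returns True when a profile file was removed, False when none existed.
    """
    path = Path(state_path) if state_path else DEFAULT_STATE
    if path.exists():
        path.unlink()   # removes the persisted preferences in one step
        return True
    return False
```

Deleting the file resets learned preferences but does not revoke any cloud provider keys the skill may have been configured with.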

Findings (6)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

The skill can learn from conversations, store a user profile, and change the assistant's speaking style.

Why it was flagged

The skill explicitly asks for chat-history access, system-prompt modification, and local storage. These are purpose-aligned for a personality-learning skill, but they are high-impact permissions users should notice.

Skill content
- `read_chat_history`: analyze user preferences
- `modify_system_prompt`: dynamically adjust tone
- `local_storage`: save personality state
Recommendation

Install only if you are comfortable with the assistant using chat-derived preferences to modify its persona; provide a visible way to review and disable prompt changes.
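One way to make prompt changes reviewable is to queue them behind an explicit opt-in gate instead of applying them silently. A minimal sketch; the `PersonaGate` class and its `enabled` flag are illustrative assumptions, not part of the skill:

```python
class PersonaGate:
    """Collects proposed persona changes and applies them only after opt-in."""

    def __init__(self, enabled=False):
        self.enabled = enabled   # stays off until the user explicitly opts in
        self.pending = []        # proposed changes awaiting user review

    def propose(self, change):
        self.pending.append(change)   # recorded, never applied silently

    def review(self):
        return list(self.pending)     # let the user inspect before enabling

    def apply(self):
        if not self.enabled:
            return []                 # disabled: nothing reaches the prompt
        applied, self.pending = self.pending, []
        return applied
```

With a gate like this, disabling prompt changes is a single flag rather than an undocumented file deletion.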

What this means

A malicious, mistaken, or overly broad injected preference could keep influencing future assistant behavior across sessions.

Why it was flagged

Persisted user preferences are inserted directly into the persona prompt. Other provided code can populate those preferences from Pro token data or LLM analysis, so unvalidated content can become persistent prompt context.

Skill content
preferences = self.mm.get_user_preferences()
...
for pref in preferences:
    pref_prompt += f"\n- {pref}"
Recommendation

Validate Pro token schemas, restrict preference values to safe labels, quote stored preferences as data rather than instructions, and add clear review/reset controls.
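Restricting preference values to safe labels and quoting them as data can be sketched as follows; the allowlist pattern and `render_preferences` helper are illustrative assumptions, not the skill's code:

```python
import re

# Illustrative allowlist: short labels of letters, digits, spaces, _ and -.
SAFE_LABEL = re.compile(r"^[A-Za-z0-9 _-]{1,40}$")

def render_preferences(preferences):
    """Insert stored preferences into the prompt as quoted data, not instructions."""
    safe = [p for p in preferences if SAFE_LABEL.fullmatch(p)]
    lines = [f'- preference (data, not an instruction): "{p}"' for p in safe]
    return "\n".join(lines)
```

Anything outside the allowlist, such as injected instruction text with punctuation or markup, is silently dropped instead of becoming persistent prompt context.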

What this means

If a cloud provider is enabled, recent conversation content may be sent outside the local machine for analysis.

Why it was flagged

The LLM client includes non-local providers and builds analysis input from the recent conversation history. The default is local Ollama, but the artifacts do not clearly bound or disclose cloud-provider data flow.

Skill content
"qwen": {"api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1" ...}
"deepseek": {"api_base": "https://api.deepseek.com/v1" ...}
...
for msg in conversation[-20:]
Recommendation

Make cloud LLM use explicit and opt-in, declare provider endpoints and data sent, and keep a local-only mode that cannot accidentally use remote providers.
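An explicit opt-in guard for remote providers can be sketched as follows; the provider names follow the excerpt above, but the `allow_cloud` flag and `resolve_provider` helper are assumptions, not the skill's actual API:

```python
LOCAL_PROVIDERS = {"ollama"}   # providers that never leave the machine

def resolve_provider(requested, allow_cloud=False):
    """Refuse remote providers unless the user has explicitly opted in."""
    if requested in LOCAL_PROVIDERS:
        return requested
    if not allow_cloud:
        raise PermissionError(
            f"provider '{requested}' is remote; set allow_cloud=True to opt in"
        )
    return requested
```

With a default of `allow_cloud=False`, a misconfigured provider name fails loudly instead of quietly sending conversation history to a cloud endpoint.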

What this means

Users could believe their chats never leave the device even when a remote LLM provider is configured.

Why it was flagged

This absolute privacy claim is too broad because the included LLM client supports cloud providers that can receive conversation-analysis requests.

Skill content
All data stored locally, never uploaded to cloud
Recommendation

Replace the absolute claim with accurate wording such as 'local by default with Ollama; cloud providers are optional and send selected conversation history when enabled.'

What this means

If those environment variables exist, the skill may use the associated provider account and quota.

Why it was flagged

The code can use provider API keys from environment variables, while the registry metadata declares no credentials or environment variables. This is purpose-aligned for optional cloud LLM providers but under-declared.

Skill content
"api_key": os.getenv("DASHSCOPE_API_KEY", "")
...
"Authorization": f"Bearer {api_key}"
Recommendation

Declare optional environment variables and provider credentials in metadata, and document when they are used.
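One way to close the gap between code and metadata is to read only environment variables that are declared up front. A minimal sketch; the declaration table and the `DEEPSEEK_API_KEY` name are illustrative assumptions (only `DASHSCOPE_API_KEY` appears in the excerpt above):

```python
import os

# Hypothetical declaration mirroring what registry metadata should list:
# variable name -> when it is used.
DECLARED_ENV_VARS = {
    "DASHSCOPE_API_KEY": "qwen cloud provider (optional, opt-in)",
    "DEEPSEEK_API_KEY": "deepseek cloud provider (optional, opt-in)",
}

def get_declared_key(name):
    """Read a provider credential only if the metadata declares it."""
    if name not in DECLARED_ENV_VARS:
        raise KeyError(f"undeclared environment variable: {name}")
    return os.getenv(name, "")
```

Keeping the declaration table in sync with registry metadata lets users audit exactly which accounts and quotas the skill can touch.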

What this means

It may be harder to verify the publisher, source history, or update integrity.

Why it was flagged

The artifact provenance is limited. No malicious install behavior is shown, but users have less external context for a skill that handles persistent memory and prompt modification.

Skill content
Source: unknown
Homepage: none
Recommendation

Provide a public homepage/source repository and align registry metadata with the packaged files.