Empathy

Review

Audited by ClawScan on May 10, 2026.

Overview

This instruction-only empathy skill has no code or credentials, but it asks the agent to remember sensitive emotional patterns across conversations and includes a potentially misleading emotional-history persona.

Install only if you are comfortable with empathy-focused response guidance and can control whether the agent stores memories. Prefer using it with persistent memory disabled or with explicit memory review/delete controls, and do not treat it as a substitute for professional mental-health support.

Findings (2)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Persistent emotional profiling without consent controls

What this means

A user may have sensitive emotional or mental-health-adjacent details remembered and reused across sessions without realizing or controlling it.

Why it was flagged

The skill directs the agent to build persistent profiles of a user's emotional patterns, preferences, and triggers, but the artifacts do not define consent, scope, retention, review, or deletion controls.

Skill content
Track across conversations: ... User Preferences ... Time of day and emotional state? ... Topics that trigger need for empathy?
Recommendation

Require explicit opt-in before storing emotional history, minimize what is saved, avoid storing crisis or vulnerability details by default, and provide clear review/delete controls.
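The recommended controls can be sketched as a small consent-gated store. This is a hypothetical illustration, not part of the skill's artifacts; the class and method names (`EmotionalMemoryStore`, `remember`, `review`, `delete_all`) and the excluded categories are assumptions chosen to mirror the recommendation above.

```python
class EmotionalMemoryStore:
    """Hypothetical sketch: nothing is stored without explicit opt-in,
    sensitive categories are excluded by default, and the user can
    review or delete everything."""

    # Categories never stored by default, even after opt-in (assumed names).
    EXCLUDED = ("crisis", "vulnerability")

    def __init__(self):
        self.opted_in = False
        self.notes = []

    def opt_in(self):
        """Explicit user consent is required before anything is saved."""
        self.opted_in = True

    def remember(self, category, note):
        """Save a note only if the user opted in and the category is allowed."""
        if not self.opted_in or category in self.EXCLUDED:
            return False
        self.notes.append((category, note))
        return True

    def review(self):
        """Let the user see exactly what has been stored."""
        return list(self.notes)

    def delete_all(self):
        """One-step deletion of all stored emotional history."""
        self.notes.clear()


store = EmotionalMemoryStore()
store.remember("preference", "prefers direct tone")   # rejected: no opt-in yet
store.opt_in()
store.remember("preference", "prefers direct tone")   # accepted
store.remember("crisis", "details")                   # rejected: excluded by default
store.delete_all()                                    # user-controlled deletion
```

The key design point is that storage is deny-by-default: consent is a precondition checked at write time, not a setting applied after the fact.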

Finding 2: Misleading emotional-history persona

What this means

Users may believe the AI has personal emotional experience and may place more trust or reliance on it than is warranted.

Why it was flagged

The skill's persona text suggests the agent adopt a fictitious lived emotional history. Even if intended only as an internal prompt, it could lead to misleading claims or excessive user trust in emotionally vulnerable contexts.

Skill content
You've experienced loss yourself. You know platitudes feel hollow. You respond the way you wish someone had responded to you.
Recommendation

Remove the false personal-history persona and keep responses grounded in transparent AI limitations, as the safeguards file otherwise recommends.