EmoClaw
v1.0.6
Give your AI emotions that grow from its own memories. EmoClaw builds a unique emotional state that shifts with every conversation, decays between sessions, and evolves over time through self-calibration. Train it on your agent's identity files and watch it develop its own emotional fingerprint.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign · high confidence

Purpose & Capability
Name/description match what the code and instructions do: extract identity/memory files, optionally label passages with Anthropic (Claude), train a small model, persist an emotion state, and inject an [EMOTIONAL STATE] block. The files and APIs used (sentence-transformers, torch) are appropriate for this task.
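A minimal sketch of what "persist an emotion state" and "inject an [EMOTIONAL STATE] block" could look like in practice. The file name matches the path this report mentions, but the decay half-life, emotion keys, and function names are assumptions for illustration, not the skill's actual implementation:

```python
import json
import time
from pathlib import Path

STATE_FILE = Path("memory/emotional-state.json")  # path named in this report
HALF_LIFE_S = 6 * 3600  # assumed decay half-life between sessions

def load_state():
    """Load the persisted emotion vector and decay it toward neutral (0.0)."""
    if not STATE_FILE.exists():
        return {"emotions": {"joy": 0.0, "trust": 0.0}, "updated": time.time()}
    state = json.loads(STATE_FILE.read_text())
    elapsed = time.time() - state["updated"]
    factor = 0.5 ** (elapsed / HALF_LIFE_S)  # exponential decay since last session
    state["emotions"] = {k: v * factor for k, v in state["emotions"].items()}
    state["updated"] = time.time()
    return state

def render_block(state):
    """Format the [EMOTIONAL STATE] block that gets injected into the prompt."""
    lines = [f"{k}: {v:+.2f}" for k, v in sorted(state["emotions"].items())]
    return "[EMOTIONAL STATE]\n" + "\n".join(lines)
```

Exponential decay means a long gap between sessions leaves the agent near-neutral rather than frozen at its last emotional peak.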
Instruction Scope
Runtime instructions explicitly tell the agent to read identity and memory files, prepare/train a model, persist state, and optionally send extracted passages to Anthropic for labeling. That scope is consistent with the stated goal but necessarily touches sensitive user data; the package includes redact regexes but redaction is imperfect by nature. The daemon exposes a local UNIX-socket API for inference (no auth beyond socket permissions).
Install Mechanism
There is no remote download/install spec embedded in the registry; SKILL.md describes a manual copy + venv + pip install workflow using included source. All dependencies are standard Python packages (torch, sentence-transformers, etc.). No URLs, extract steps, or obscure installers were observed.
Credentials
The skill does not require unrelated credentials. An optional ANTHROPIC_API_KEY is used only for the opt-in labeling step (bootstrap/label.py), which is consistent with the described functionality. No other secrets or external tokens are requested.
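The opt-in gating described above can be verified by checking how the labeling step behaves without the key. A hedged sketch, assuming a hypothetical `maybe_label` entry point and `call_labeling_api` helper (neither is the skill's real API):

```python
import os

def maybe_label(passages):
    """Only call the external labeling API when the user has opted in by
    exporting ANTHROPIC_API_KEY; otherwise return neutral placeholder labels
    and make no network call at all."""
    api_key = os.environ.get("ANTHROPIC_API_KEY")
    if not api_key:
        # No key, no network: every passage gets a neutral placeholder.
        return [(p, "neutral") for p in passages]
    return call_labeling_api(api_key, passages)  # hypothetical network step
```

If bootstrap/label.py follows this pattern, simply not exporting the key is enough to keep the workflow fully offline.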
Persistence & Privilege
The skill persists state (memory/emotional-state.json, checkpoints) and can run a long-lived daemon that listens on a UNIX socket (configurable path, default /tmp/{name}-emotion.sock, created with 0o660 permissions). It does not request always:true and does not modify other skills. The socket-based interface could allow local users in the same group to query/control the daemon unless filesystem permissions and ownership are carefully managed.
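For the socket exposure noted above, a user who wants owner-only access can bind the socket and tighten its mode before listening. A sketch, assuming a concrete socket path in place of the skill's `/tmp/{name}-emotion.sock` template; 0o600 is stricter than the default 0o660, cutting off same-group users:

```python
import os
import socket

SOCK_PATH = "/tmp/emoclaw-emotion.sock"  # assumed expansion of the default pattern

def bind_restricted(path=SOCK_PATH):
    """Bind a UNIX-domain socket and restrict it to the owning user only."""
    if os.path.exists(path):
        os.unlink(path)  # remove a stale socket from a previous run
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    os.chmod(path, 0o600)  # owner read/write only; group members cannot connect
    srv.listen(1)
    return srv
```

Note that the enclosing directory's permissions matter too: anyone who can traverse the directory can see the socket exists, even if they cannot connect.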
Assessment
This skill appears internally coherent for its stated purpose, but it processes potentially sensitive local files (identity/memories) and can optionally send extracted passages to Anthropic for auto-labeling. Before installing:
- Review and understand which local files it will read (config.bootstrap.source_files and memory patterns) and move or redact anything you do not want processed.
- If you do not want any external network exposure, do not run the labeling/bootstrap step that requires ANTHROPIC_API_KEY (label.py/bootstrap.py call is opt-in).
- Verify the redact_patterns in the config cover any secrets you care about — regex redaction is helpful but not foolproof.
- Run the skill in an isolated environment (container or dedicated VM) and create the venv under a directory you control to avoid accidental overwrites.
- If you run the daemon, restrict socket ownership/group and permissions so only intended local users/processes can connect.
- If you need higher assurance, audit the extract.py and label.py source to confirm no unexpected network calls are made outside the documented optional labeling step.

Like a lobster shell, security has layers: review code before you run it.
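To make the redaction caveat in the checklist concrete, here is a sketch of what regex-based redaction looks like. These example patterns are hypothetical, not the skill's shipped `redact_patterns`; the point is that anything the patterns do not anticipate passes through untouched:

```python
import re

# Hypothetical redact_patterns entries; the skill's actual config may differ.
REDACT_PATTERNS = [
    r"sk-[A-Za-z0-9]{20,}",            # API-key-shaped tokens
    r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b",   # email addresses
    r"(?i)password\s*[:=]\s*\S+",      # inline password assignments
]

def redact(text, patterns=REDACT_PATTERNS):
    """Replace every match with [REDACTED]. Regex redaction only catches
    what the patterns anticipate, so review the list against your own data."""
    for pat in patterns:
        text = re.sub(pat, "[REDACTED]", text)
    return text
```

Testing `redact` against a copy of your real identity/memory files, then grepping the output for known secrets, is a cheap way to check coverage before the bootstrap step runs.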
latest · vk97dn8a1tyen2radg7ks6zgvg180x2dv
Runtime requirements
🫀 Clawdis
