Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Inner Life Evolve

v1.0.4

Your agent does the same things the same way forever. inner-life-evolve analyzes patterns, challenges assumptions, and proposes improvements — writing proposals to tasks/QUEUE.md for your approval, never executing them automatically.

by Danila @dkistenev
Security Scan
VirusTotal
Suspicious
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description (agent self-improvement) align with the declared behavior: reading agent state/memory and producing proposals. The single required binary (jq) is reasonable for processing JSON state files.
Instruction Scope
SKILL.md explicitly directs deep reads of agent files (BRAIN.md, SELF.md, memory/* including inner-state.json, and a week digest) and to write only to tasks/QUEUE.md. This is appropriate for an evolver, but it does involve broad read access to potentially sensitive agent memory—expected for the stated purpose but privacy-sensitive.
Install Mechanism
Instruction-only skill with no install spec or downloaded code. Lowest-risk install model; nothing is written to disk beyond the skill's normal writes (tasks/QUEUE.md).
Credentials
No environment variables, secrets, or external credentials are requested. File reads/writes are scoped to agent state and the tasks queue, matching the skill's purpose.
Persistence & Privilege
The always flag is false and the skill is user-invocable. It requests no persistent elevated privileges or modifications to other skills' configs. Autonomous invocation is allowed by default (a platform norm), but the skill's safety rules explicitly forbid auto-execution of proposals.
Assessment
This skill appears to do what it says: it reads your agent's state and memory to propose specific improvements and writes proposals to tasks/QUEUE.md without auto-executing them. Before installing, (1) ensure inner-life-core is installed and the referenced files (memory/inner-state.json, BRAIN.md, tasks/QUEUE.md) exist, (2) confirm you are comfortable with a tool that reads agent memory (these files can contain sensitive information), (3) have jq available on the host or install it, and (4) review every [EVOLVER] proposal before approving any changes. If you want tighter privacy, consider redacting or limiting what you store in memory/ or adjusting access controls for those files.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: jq
latest: vk971fva7v7wp5kw8pkjrnnyg9d820gsf
559 downloads
1 star
4 versions
Updated 7h ago
v1.0.4
MIT-0

inner-life-evolve

Evolution is not optional. But it requires permission.

Requires: inner-life-core

Prerequisites Check

Before using this skill, verify that inner-life-core has been initialized:

  1. Check that memory/inner-state.json exists
  2. Check that BRAIN.md exists
  3. Check that tasks/QUEUE.md exists

If any are missing, tell the user: "inner-life-core is not initialized. Install it with clawhub install inner-life-core and run bash skills/inner-life-core/scripts/init.sh." Do not proceed without these files.
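A minimal sketch of that guard in POSIX shell, checking the three paths listed above (the exact wording of the error message is up to you):

```shell
# Verify inner-life-core has been initialized before running the evolver.
check_prereqs() {
  missing=0
  for f in memory/inner-state.json BRAIN.md tasks/QUEUE.md; do
    [ -f "$f" ] || { echo "missing: $f" >&2; missing=1; }
  done
  if [ "$missing" -ne 0 ]; then
    echo "inner-life-core is not initialized; run: clawhub install inner-life-core" >&2
    return 1
  fi
  echo "prerequisites ok"
}
```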

What This Solves

Without evolution, agents plateau. They find a way that works and repeat it forever — even as the world changes. inner-life-evolve analyzes your agent's patterns, challenges its assumptions, and writes concrete improvement proposals. But it never auto-executes — you approve first.

How It Works

Step 1: Deep Context Read (Context Level 4)

Read everything:

  • AGENTS.md, TOOLS.md, BRAIN.md, SELF.md
  • memory/week-digest.md (NOT individual diaries — use digest)
  • memory/habits.json — habits + user patterns
  • memory/drive.json — seeking, avoidance
  • memory/relationship.json — trust, lessons
  • memory/inner-state.json — emotions, frustrations
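Since jq is the skill's only required binary, the JSON half of this deep read can be sketched with a couple of queries. The field names below (.frustrations, .habits[].strength) are illustrative assumptions, not a documented inner-life-core schema:

```shell
# Summarize analysis signals from the agent's JSON state with jq.
# NOTE: .frustrations and .habits[].strength are assumed field names.

# How many recurring frustrations are recorded in inner state?
count_frustrations() {
  jq -r '.frustrations // [] | length' memory/inner-state.json
}

# Which habits have decayed below a strength threshold of 0.3?
list_weak_habits() {
  jq -r '.habits // {} | to_entries[] | select(.value.strength < 0.3) | .key' \
    memory/habits.json
}
```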

Step 2: Challenge Assumptions

For each potential improvement, structure thinking:

Assumption: [what we currently believe/do]
Is it true? [evidence for/against]
What if false? [alternative approach]
New proposal: [concrete change]

Look for:

  • Recurring frustrations → systemic solutions (not patches)
  • Stale habits → habits with declining strength or unused for weeks
  • Trust dynamics → areas where trust has grown but behavior hasn't adapted
  • Seeking themes → research interests that could become capabilities
  • Avoidance patterns → things the agent avoids that might be valuable

Step 3: Write Proposals to QUEUE

Write proposals to tasks/QUEUE.md under the Ready section:

- [EVOLVER] Description of proposed change
  Rationale: 1-2 sentences explaining why
  Steps: concrete implementation steps
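A sketch of this write step in POSIX shell, assuming the Ready section is marked by a "## Ready" heading (that heading text is an assumption about QUEUE.md's layout, not part of the spec above):

```shell
# Append an [EVOLVER] proposal beneath the Ready heading in tasks/QUEUE.md.
# NOTE: "## Ready" as the heading text is an assumed QUEUE.md convention.
add_proposal() {
  desc=$1 rationale=$2 steps=$3
  awk -v d="$desc" -v r="$rationale" -v s="$steps" '
    { print }
    /^## Ready$/ {
      print "- [EVOLVER] " d
      print "  Rationale: " r
      print "  Steps: " s
    }
  ' tasks/QUEUE.md > tasks/QUEUE.md.tmp && mv tasks/QUEUE.md.tmp tasks/QUEUE.md
}
```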

Step 4: Announce

Send a summary to the user, at most 5 sentences, covering:

  • Habits: [strong habits, new patterns]
  • Trust changes: [trust dynamics]
  • Recurring frustrations: [repeated problems → suggested fix]
  • Seeking themes: [active research → suggested development]

Safety Rules

  • Never auto-execute proposals — user approves first
  • Brain Loop reads QUEUE and shows [EVOLVER] tasks at lower priority
  • Tasks in Ready > 7 days without action → Brain Loop sends reminder
  • Proposals should be specific and actionable, not vague "improve X"

Recommended Schedule

Run 1-2 times per week (e.g., Wednesday and Sunday evenings). Needs enough data to analyze — running daily produces low-quality proposals.
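If your platform supports scheduled invocation, that cadence could be expressed as a crontab entry. The openclaw skill run command below is hypothetical; substitute whatever invocation your agent runtime actually provides:

```
# min hour day-of-month month day-of-week (0=Sunday, 3=Wednesday)
# Runs Wednesday and Sunday at 21:00 local time.
0 21 * * 0,3 openclaw skill run inner-life-evolve
```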

State Integration

Reads: everything (Context Level 4 Deep)

Writes: tasks/QUEUE.md only. Does NOT write to state files directly.

The evolver observes but doesn't touch the controls. It proposes. The user decides.

When Should You Install This?

Install this skill if:

  • Your agent has plateaued and isn't improving
  • You want structured self-improvement proposals
  • You value evolution with human oversight
  • You want your agent to challenge its own assumptions

Part of the openclaw-inner-life bundle. Requires: inner-life-core
