Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Lena Learning

v1.0.0

Lena learns from every conversation and improves itself automatically

0 · 74 · 0 current · 0 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Pending
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description (continuous self-improvement) aligns with instructions to extract insights, update memory files, and track preferences. However, the SKILL.md explicitly instructs updating AGENTS.md / TOOLS.md (configuration files for other agents/skills), which is outside a narrow 'learning' purpose and could change other skills' behavior.
Instruction Scope
Instructions tell the agent to scan recent messages, extract corrections and preferences, and write them to files (memory/YYYY-MM-DD.md, MEMORY.md, USER.md, TOOLS.md, AGENTS.md). Those writes are broad (long-term memory plus tool/agent metadata) and are not limited or scoped to safe paths. The workflow also calls for regular heartbeats and triggers 'at the end of every session' and 'daily', implying recurring autonomous actions that will continually read and persist conversational data (possibly including sensitive PII). The skill does not declare or justify access to the other agent config files it plans to edit.
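One way a platform or wrapper could bound the write scope described above is to confine every write to a single memory root and reject anything that resolves outside it. A minimal Python sketch under that assumption; the `safe_write` helper and its confinement check are hypothetical illustrations, not part of the skill or the OpenClaw platform:

```python
from pathlib import Path

def safe_write(memory_root: Path, relative_path: str, text: str) -> Path:
    """Write text only inside memory_root (hypothetical confinement helper).

    Resolves the target path first, so absolute paths and '../' traversal
    (e.g. an attempt to reach AGENTS.md outside the root) are refused
    before anything touches the filesystem.
    """
    root = memory_root.expanduser().resolve()
    target = (root / relative_path).resolve()
    if target != root and root not in target.parents:
        raise PermissionError(f"refusing write outside {root}: {target}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(text, encoding="utf-8")
    return target
```

With a check like this, the daily-note writes (memory/YYYY-MM-DD.md) would succeed while an instruction to edit ../AGENTS.md would raise, which is the kind of scoping the scan notes is missing.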
Install Mechanism
Instruction-only skill with no install spec or binaries — low installation risk. No downloads or executable code included.
Credentials
No environment variables, credentials, or external endpoints are requested, which is proportionate to the stated purpose. However, the skill's file-write behavior is not declared in the registry metadata (no required config paths), so its file-access scope is unclear.
Persistence & Privilege
The skill requests persistent memory files and explicitly mentions updating AGENTS.md/TOOLS.md (other agent/skill artifacts). While it is not marked always:true, the declared triggers (every session, on corrections, daily heartbeat) produce frequent autonomous activity and persistent changes to agent data and config; modifying other skills' configs is a privilege-escalation risk if not confined.
What to consider before installing
This skill is coherent with 'learn from conversations' but has two practical risks:

1) It will write persistent memory files containing conversation excerpts and preferences; those can contain sensitive or private data unless you know exactly where they are stored and who can read them.
2) It explicitly instructs updating AGENTS.md / TOOLS.md (other agent/skill config), which could change other skills' behavior without clear consent.

Before installing, consider:
- Ask the publisher (or inspect the runtime) for the exact file paths used (where memory/ and AGENTS.md will be written). Decline the install unless those paths are confined to a directory you control.
- Require an opt-in or manual review step before any write that modifies AGENTS.md/TOOLS.md.
- Limit file permissions so only the agent identity can write, and keep rotating backups of the existing AGENTS.md/TOOLS.md.
- If you handle sensitive data, avoid enabling automatic 'save after every session' and daily heartbeats until you have confirmed the data retention policy.
- If possible, run this skill in a sandbox or with a test account first.

Confidence is medium because the skill is instruction-only (no executable code), so we can read its intended behavior, but we lack runtime implementation details: exact file locations, who can read or write them, and whether the platform enforces scopes. Knowing the concrete file paths and whether the platform prevents cross-skill file edits would raise confidence and could move the verdict to benign or confirm malicious behavior.
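The backup mitigation above can be sketched in a few lines of Python. This is an illustration only: the `backup_configs` helper, the config-file names, and the timestamped `.bak` naming scheme are assumptions, not anything the skill or platform provides:

```python
import shutil
import time
from pathlib import Path

def backup_configs(workspace: Path, names=("AGENTS.md", "TOOLS.md")) -> list[Path]:
    """Copy each existing config to a timestamped .bak (hypothetical helper).

    Taking a snapshot before enabling a skill that edits these files lets
    you diff its changes afterwards and revert if needed.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    backups = []
    for name in names:
        src = workspace / name
        if src.exists():
            dst = workspace / f"{name}.{stamp}.bak"
            shutil.copy2(src, dst)  # copy2 preserves timestamps/metadata
            backups.append(dst)
    return backups
```

Running this once before install and again after a session gives you two snapshots to diff, making any unconsented edit to AGENTS.md/TOOLS.md visible.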


latest · vk973xdg9k5835ypnh58kxqag0183d6jg

