Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Lobster Continuous Learning V2

v1.0.0

Instinct-based learning system that observes sessions via hooks, creates atomic instincts with confidence scoring, and evolves them into skills/commands/agents.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious · medium confidence
⚠ Purpose & Capability
The skill claims to be an observation→instinct system, which is consistent with the included scripts. However, the registry metadata declares no required binaries or environment variables, while the implementation expects and invokes multiple system tools (the claude CLI, git, python3, and optionally xprintidle/powershell). Failing to declare these runtime requirements is an incoherence: a user installing this skill would not be warned about them. The skill also writes to ~/.claude/homunculus and project-scoped directories, which is consistent with its stated purpose, but the missing runtime declarations remain a significant omission.
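Because these runtime requirements are undeclared, a prospective user can check for them manually before installing. A minimal sketch (the tool list comes from this report; the skill itself ships no such check):

```python
import shutil

# Runtime tools the scan found the scripts invoking, despite the
# registry declaring no required binaries.
REQUIRED_TOOLS = ["claude", "git", "python3"]

def missing_tools(tools):
    """Return the subset of tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    missing = missing_tools(REQUIRED_TOOLS)
    if missing:
        print("Not on PATH:", ", ".join(missing))
    else:
        print("All runtime tools found")
```

Running this before install surfaces exactly the warning the registry metadata should have provided.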
⚠ Instruction Scope
The SKILL.md and scripts instruct the agent to collect hook inputs/outputs, detect project context, and persist observations and generated 'instinct' YAML/MD files under project and global homunculus directories. The observer-loop feeds a Claude (haiku) session a hard constraint: when 3+ patterns exist, it MUST write an instinct file directly and MUST use the Write tool without asking permission. This grants the LLM autonomous file-creation authority. The hooks capture tool inputs/outputs (including truncated file contents) and attempt to redact secrets via regex, but storing truncated file contents and tool outputs contradicts the high-level promise 'Never include actual code snippets' and creates a privacy/exfiltration surface that depends on fragile scrubbing. The instructions also rely on environment variables (CLAUDE_PROJECT_DIR, CLV2_CONFIG, CLV2_PYTHON_CMD, ECC_SKIP_OBSERVE, etc.) that are not declared in the registry metadata.
Install Mechanism
There is no install spec (instruction-only), and the package ships as scripts — no remote downloads or archive extraction. That lowers installer risk. However the presence of executable scripts that are intended to be registered as hooks or run as background agents means installing still creates persistent filesystem state under ~/.claude/homunculus and project directories.
⚠ Credentials
The registry says 'required env vars: none', yet the code expects and uses several (CLAUDE_PROJECT_DIR, CLV2_PYTHON_CMD, CLV2_CONFIG, ECC_SKIP_OBSERVE, ECC_HOOK_PROFILE, etc.). The observer also relies on external CLIs (claude, git) and on access to project directories and home-directory storage. The skill requests no API keys, which is good, but its observation behavior collects tool I/O that may contain secrets or tokens. The scrubbing logic attempts redaction but is regex-based and can miss many forms of secrets, so the effective environment access is broader than advertised.
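The fragility of regex-based scrubbing is easy to demonstrate with a toy redactor. The patterns below are illustrative, not the skill's actual rules:

```python
import re

# Toy redactor in the spirit of regex-based scrubbing: it only knows a
# few well-known key shapes, so anything else passes through untouched.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID shape
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access token shape
]

def scrub(text: str) -> str:
    """Replace every match of a known secret pattern with a placeholder."""
    for pat in PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

caught = scrub("key=AKIAABCDEFGHIJKLMNOP")  # matches the AWS pattern
missed = scrub("password = 'hunter2'")      # no pattern covers plain passwords
print(caught)   # key=[REDACTED]
print(missed)   # password = 'hunter2' -- leaks through unchanged
```

Any secret that does not fit a listed shape (plain passwords, internal tokens, connection strings) survives redaction, which is why the observations store should be audited rather than trusted.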
Persistence & Privilege
The skill does not set always:true and is user-invocable; it runs background processes per-project and stores state under ~/.claude/homunculus and per-project directories. That persistent background presence is coherent with its goal. The main privilege of concern is that the observer spawns an LLM subprocess (claude) with allowed Write capabilities and an explicit instruction to 'write or update the instinct file in this run instead of asking for confirmation' — this gives the LLM autonomy to create/modify files in user repositories, which expands blast radius if the model or prompt were manipulated.
What to consider before installing
Before enabling or installing this skill, consider:
- It will run background processes and create directories/files under ~/.claude/homunculus and in your project directory. The observer is disabled by default in config.json (observer.enabled: false); only enable it if you accept background analysis.
- The code expects runtime tools that are not declared in the metadata: the claude CLI, git, and a Python interpreter. Ensure you want those invoked on your machine.
- The observer persists truncated tool inputs/outputs (including file contents read via 'Read') into observations and then feeds them to a Claude model that is instructed to autonomously write 'instinct' files. If those observations contain secrets or sensitive code, scrubbing is attempted but is regex-based and may miss cases. Audit what gets recorded, and consider running in an isolated environment or disabling automatic hooks.
- The LLM prompt in observer-loop explicitly instructs the model to create/update files without asking for confirmation. This is coherent with the goal but raises risk: a compromised model, or a modified prompt, could cause unexpected writes. If you install, review and (if needed) modify the prompt/behavior to require manual approval before writes.
- If you still want this, test it in a disposable repo or VM first. Check the hook registration and make sure you understand how to stop the observer (start-observer.sh stop) and how to disable it (set observer.enabled to false in config.json, or create the 'disabled' sentinel). Review the scrubbing logic and consider additional safeguards (e.g., stricter redaction, manual approval for instincts, or running the analyzer offline).
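The config-based disable path can be scripted. A sketch assuming a JSON config with an observer.enabled key (the key name comes from this report; the config location varies per install, so the caller supplies it):

```python
import json
from pathlib import Path

def disable_observer(config_path: Path) -> None:
    """Set observer.enabled to false in the skill's config.json.

    The observer.enabled key is described in the scan report; the file
    location is install-specific and must be passed by the caller.
    """
    config = json.loads(config_path.read_text())
    config.setdefault("observer", {})["enabled"] = False
    config_path.write_text(json.dumps(config, indent=2) + "\n")
```

This complements, rather than replaces, stopping any already-running observer process (the report mentions start-observer.sh stop and a 'disabled' sentinel file for that).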

Like a lobster shell, security has layers — review code before you run it.

latest: vk9750b1ycrj36azjmedj3xte8x847ygq

