Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Feedback Learning
v1.0.0 · Zero-LLM feedback learning system for OpenClaw agents. Detects user feedback (emoji reactions, text signals like "переделай" ("redo it") / "круто" ("cool")), logs events, discovers...
by Maxim Kravtsov (@surdeddd)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious (medium confidence)
Purpose & Capability
Name/description match the included scripts: detection, logging, pattern analysis, and reporting all operate on a local ~/.openclaw/shared/learning store. No network access, credentials, or unrelated binaries are requested. One mismatch: analyze-patterns.py writes promoted rules with a comment saying "Will be refined by LLM in cron", yet no LLM or refinement step is included in the package or SKILL.md. This discrepancy is unexplained and worth asking the author about.
Instruction Scope
SKILL.md tells operators to copy scripts into a shared directory, add boot-time loading of genes.json to agents, and run cron jobs. The runtime instructions and scripts read/write only files inside ~/.openclaw/shared/learning, which aligns with the stated purpose, but agents are instructed to read promoted rules at boot and apply them automatically. Because promoted rules are derived directly from user signals/hints (i.e., user input), this creates a high risk of behavior change from a small number of events (the promotion threshold is 3 occurrences in 30 days). In addition, the scan detected unicode control characters in SKILL.md matching known prompt-injection patterns, which could be an attempt to manipulate human or automated reviewers.
Install Mechanism
No install spec (instruction-only with included scripts). That lowers supply-chain risk: nothing is downloaded or executed from remote URLs. Scripts are plain Python/bash and operate locally.
Credentials
The skill requests no environment variables, no credentials, and hardcodes a local file path under the user's HOME. That is proportionate to a local feedback pipeline.
Persistence & Privilege
The skill is not forced-always, but it asks operators to add files to a shared persistent learning directory and to have agents read genes.json at boot to apply rules. That gives the skill an effective persistent influence over agent behavior. Combined with low promotion thresholds and direct use of user-supplied signals/hints, this enables easy rule poisoning or accidental behavioral changes if inputs are not validated or human-reviewed before promotion.
Scan Findings in Context
[unicode-control-chars] unexpected: The SKILL.md contained unicode-control characters flagged as potential prompt-injection. Prompt-injection tokens in documentation are not expected for a local feedback pipeline and could be an attempt to influence automated/human reviewers or to hide text. Investigate and sanitize SKILL.md before trusting its instructions.
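Before trusting SKILL.md, you can scan it for hidden characters yourself. A minimal sketch (the function name and the allow-list are illustrative, not part of the skill): characters in the Unicode Cc (control) and Cf (format) categories render invisibly in most viewers and are the usual carriers for hidden prompt-injection text, while tab and newline are legitimate in Markdown and should be allowed.

```python
import unicodedata

# Tab, newline, and carriage return are legitimate whitespace in Markdown.
ALLOWED = {"\t", "\n", "\r"}

def find_suspicious_chars(text):
    """Return (line, column, codepoint, category) for each hidden Cc/Cf character."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            cat = unicodedata.category(ch)
            if cat in ("Cc", "Cf") and ch not in ALLOWED:
                findings.append((lineno, col, f"U+{ord(ch):04X}", cat))
    return findings

# Example: a zero-width space hiding between two words
print(find_suspicious_chars("review\u200bme"))  # [(1, 7, 'U+200B', 'Cf')]
```

Run it over SKILL.md (and every included script) and reject the package if any finding is not explainable.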
What to consider before installing:
- The skill is coherent with its stated purpose and requires no external credentials or network access. The included scripts operate only on a local directory under $HOME and generate reports, patterns, and promoted rules.
- However, there are two important safety signals: (1) SKILL.md was flagged for unicode-control characters (possible prompt-injection), and (2) promoted rules are created automatically from user signals/hints and the agents are advised to read and apply these rules at boot. With the current default (promote at 3 occurrences in 30 days), an attacker or noisy users could poison the knowledge base and change agent behavior.
Actions you can take to reduce risk:
- Inspect and remove any suspicious unicode/control characters from SKILL.md before installing.
- Run this skill in an isolated agent or non-production environment first to observe behavior.
- Require human review before promotions: modify analyze-patterns.py to write candidate promotions to a 'pending' file instead of directly adding to genes.json, and add a manual approval step.
- Increase promotion thresholds (e.g., more than 3 occurrences and/or add manual vetting), and add provenance metadata (who triggered events) and stronger deduplication to detect automated flooding.
- Restrict which agents/users can call log-event.sh and ensure events.jsonl is writable only by a limited user/group (set filesystem permissions on the shared directory).
- Remove or clarify the LLM refinement comment: if you plan to have an LLM refine rules, make that explicit and add constraints and auditing for any LLM-run step.
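The review-gate suggested above can be sketched in a few lines. This is an illustrative design, not the skill's code: the file genes.json and the shared learning directory come from the report, while pending.jsonl, the provenance field, and both function names are hypothetical choices for the sketch.

```python
import json
import time
from pathlib import Path

def stage_candidate(base, rule, source_events):
    """Append a candidate rule to pending.jsonl instead of writing genes.json directly."""
    base = Path(base)
    base.mkdir(parents=True, exist_ok=True)
    entry = {
        "rule": rule,
        "provenance": source_events,  # record which events/users triggered the promotion
        "staged_at": time.time(),
        "status": "pending",
    }
    with (base / "pending.jsonl").open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry

def approve(base, rule):
    """Human-run approval step: copy a vetted rule into genes.json."""
    base = Path(base)
    genes_path = base / "genes.json"
    genes = json.loads(genes_path.read_text()) if genes_path.exists() else {"rules": []}
    genes["rules"].append(rule)
    genes_path.write_text(json.dumps(genes, ensure_ascii=False, indent=2))
    return genes
```

With this split, analyze-patterns.py only ever stages candidates, agents still read genes.json at boot unchanged, and nothing reaches genes.json without a human running the approval step. The provenance field also gives you the audit trail needed to spot flooding from a single source.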
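Locking down write access to the shared directory can likewise be scripted. A minimal sketch, assuming a POSIX filesystem (the helper name is illustrative; events.jsonl and the learning directory come from the report): the directory becomes owner-only, and the event log loses group/other write permission so arbitrary local processes cannot append fake feedback events.

```python
import os
import stat
from pathlib import Path

def lock_down(base):
    """Restrict the shared learning directory and event log to the owning user."""
    base = Path(base)
    os.chmod(base, stat.S_IRWXU)  # drwx------ on the shared directory
    events = base / "events.jsonl"
    if events.exists():
        # rw------- so only the owner can append events
        os.chmod(events, stat.S_IRUSR | stat.S_IWUSR)
```

If multiple agents must write events, use a dedicated group with group-write on events.jsonl instead of owner-only permissions, and keep genes.json owner-writable only.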
If you cannot perform the above checks or do not trust the source, do not install the skill into production agents. If you proceed, enforce human review of promoted rules and lock down write access to the shared learning directory.
latest · vk97dh6sgxj8f9j64c80ht486nd838k9a
