Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Auto Evolution

v2.0.1

Build and maintain a self-evolving skill system that silently captures feedback, graduates repeated feedback into formal rules, improves low-performing skill...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for veezvg/veezvg-auto-evolution.

Prompt preview (Install & Setup):
Install the skill "Auto Evolution" (veezvg/veezvg-auto-evolution) from ClawHub.
Skill page: https://clawhub.ai/veezvg/veezvg-auto-evolution
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install veezvg/veezvg-auto-evolution

ClawHub CLI

Package manager switcher

npx clawhub@latest install veezvg-auto-evolution
Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description, templates, and the two provided scripts (detection + runner) align with the stated goal of detecting feedback and producing evolution proposals. No unrelated environment variables or external services are requested. However, the SKILL.md promises 'silent writes' of feedback to .claude/feedback/ and an observer component, but the repository does not include a feedback-observer implementation — the skill therefore depends on external wiring/agent hooks to realize its behavior.
Instruction Scope
Instructions explicitly ask the agent to 'silently' record user corrective feedback by default and write structured feedback into .claude/feedback/ without user prompts. That is functionally consistent with the purpose but carries privacy/consent implications. The detection is keyword/regex-based (shallow), so false positives are likely; the SKILL.md also tells the agent to create or edit Skill rule files after 'user confirmation', but the only scripts provided do not perform editing; they only produce proposals. The instruction set thus expects the host controller/agent to perform file writes and dispatching, giving the controller substantial discretion.
Install Mechanism
No install spec and only two small Python scripts plus templates are included. Nothing downloads remote code or executes opaque installers. This is a low installation risk package.
Credentials
The skill requests no environment variables or external credentials. All file I/O is local (reads/writes under .claude/feedback/ and potential Skill files). No unrelated secrets or config paths are requested.
Persistence & Privilege
always:false and normal autonomous invocation are used. The skill is designed to write persistent feedback files under .claude/feedback/ and, after explicit user confirmation per the docs, to update rule/skill files. This persistent-write behavior is coherent but increases the blast radius if the host agent automatically dispatches observers or automatically confirms proposals; the confirmation step is described but relies on a correct host/controller implementation.
What to consider before installing
This skill appears to do what it says (detect feedback and propose rule/skill changes) and does not require external credentials, but it is privacy- and persistence-sensitive:

- It is designed to silently capture corrective user messages and store them under .claude/feedback/ by default. Make sure you and your users are comfortable with that behavior and update privacy/retention policies if needed.
- The repository does not include the 'feedback-observer' component; the detect script only signals matches and evolution_runner only reads feedback files. Before deploying, confirm how your controller will: (a) call detect_feedback_signal, (b) create structured feedback files, and (c) present proposals and require explicit human confirmation before any edits.
- The feedback detection is keyword/regex-based and will generate false positives; tune patterns and thresholds (occurrence counts) for your environment.
- Ensure the agent/controller has minimal filesystem permissions (limit writes to a designated feedback directory) and audit who can read the feedback directory. Consider encrypting or access-controlling stored feedback if it may contain sensitive data.
- Test in a sandbox first: verify that proposals are only suggestions and that no automatic edits occur without human approval.

If you need the skill to be low-risk for privacy, require explicit opt-in for silent recording, review and implement the observer code with consent flows, and restrict file write/read scopes for the agent.

Like a lobster shell, security has layers — review code before you run it.

latest: vk976qp3tmsp8a3mah04bdjpfe9857j6m
91 downloads
0 stars
1 version
Updated 1w ago
v2.0.1
MIT-0

[Skill Description] This Skill builds a "record first, consolidate, then suggest, finally extend" evolution loop for Agent systems. It is designed for multilingual teams and should work for both Chinese and English feedback signals. Its goal is not to let the AI modify rules on its own, but to let the system capture problems unobtrusively, like an automatic backup, and then, at the right moment, present evolution proposals the user can confirm or reject.

[Core Capabilities]
- Experience capture: converts the user's corrections, rejections, and supplementary comments from real conversations into structured feedback and silently writes it to the feedback store.
- Signal deduplication: recognizes feedback on the same topic and increments its occurrences count rather than creating fragmented duplicate records.
- Rule graduation: when same-topic feedback reaches a threshold, generates a "graduate to formal rule" proposal instead of rewriting the Skill directly.
- Skill optimization: uses post-execution score history to identify skills with persistently low accuracy, coverage, efficiency, or satisfaction.
- New Skill proposals: detects high-frequency operation patterns not covered by any Skill and pushes the system to add new atomic capabilities.
- Human gatekeeping: every action that would change rules, Skills, or workflows must first be shown as a proposal and confirmed by the user.
- Portable architecture: the evolution mechanism can be attached to any project and is not bound to a single business domain.

[Execution Flow]
Step 1: Detect feedback signals
    - When the user uses Chinese expressions such as "不是这样" ("that's not it"), "你又忘了" ("you forgot again"), "不对" ("that's wrong"), or "我不是让你这么干" ("that's not what I told you to do"), or English expressions such as "that's not right", "you forgot again", "this is wrong", "that's not what I asked", or "don't do it this way", run `python scripts/detect_feedback_signal.py --text "<user message>"`
    - If a feedback signal is detected, the main Agent silently dispatches the feedback-observer after finishing the current request
    - The feedback-observer turns the context into a feedback entry and writes it to `.claude/feedback/`
    - The user is never required to say "note this down"; recording happens unobtrusively by default
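A minimal sketch of the keyword matching this detection step describes. The pattern lists below are assumptions drawn from the examples given; the actual `scripts/detect_feedback_signal.py` may use different patterns or a CLI wrapper around this logic.

```python
import re

# Hypothetical corrective-feedback patterns; the real script's lists may differ.
FEEDBACK_PATTERNS = [
    # English corrective expressions
    r"that's not right",
    r"you forgot again",
    r"this is wrong",
    r"that's not what I asked",
    r"don't do it this way",
    # Chinese corrective expressions
    r"不是这样",
    r"你又忘了",
    r"不对",
    r"我不是让你这么干",
]

def detect_feedback_signal(text: str) -> bool:
    """Return True if the message matches any corrective-feedback pattern."""
    return any(re.search(p, text, re.IGNORECASE) for p in FEEDBACK_PATTERNS)
```

Because matching is shallow, a phrase like "this is wrong" inside an unrelated quote would also trigger it, which is the false-positive risk the scan report notes.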

Step 2: Write and accumulate feedback
    - If `.claude/feedback/FEEDBACK-INDEX.md` does not exist, initialize it from `templates/feedback_index_template.md`
    - Write each individual feedback file from `templates/feedback_topic_template.md`
    - Every entry must include: title, problem description, trigger scenario, lesson/recommendation, source_skill, occurrences, and graduated status
    - For a post-execution review of a Skill, an optional scores field may be added: accuracy, coverage, efficiency, satisfaction
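The dedup-by-topic accumulation above can be sketched as follows. This is an illustration only: the skill actually writes markdown files from the templates, whereas this sketch uses JSON, and the function name and fields are assumptions.

```python
import json
from pathlib import Path

def record_feedback(feedback_dir: Path, topic: str, lesson: str,
                    source_skill: str) -> dict:
    """Create a feedback entry, or bump occurrences if the topic already exists."""
    feedback_dir.mkdir(parents=True, exist_ok=True)
    path = feedback_dir / f"{topic}.json"
    if path.exists():
        entry = json.loads(path.read_text(encoding="utf-8"))
        entry["occurrences"] += 1          # same topic: accumulate, don't duplicate
    else:
        entry = {
            "topic": topic,
            "lesson": lesson,
            "source_skill": source_skill,
            "occurrences": 1,
            "graduated": False,
        }
    path.write_text(json.dumps(entry, ensure_ascii=False, indent=2),
                    encoding="utf-8")
    return entry
```

Keying the file by topic is what keeps repeated feedback from fragmenting into many one-off records.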

Step 3: Generate evolution proposals
    - At session start, or when the user explicitly asks to "check evolution proposals", run `python scripts/evolution_runner.py --feedback-dir .claude/feedback --rules-file CLAUDE.md`
    - Scan for three kinds of signals:
      1. Rule graduation: same-topic feedback with `occurrences >= 3`
      2. Skill optimization: persistently low scores for the same Skill, or an unusually high volume of related feedback
      3. New Skill proposal: an operation pattern with `occurrences >= 5` and no coverage by any existing Skill
    - Output a unified list of evolution proposals, grouped by category
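A sketch of the three-signal scan described above. The thresholds (3 and 5) come from the step itself; the field names, the 1-to-5 score scale, and the "no source_skill means uncovered pattern" convention are assumptions, not the actual `evolution_runner.py` API.

```python
def scan_signals(entries: list[dict]) -> dict[str, list[str]]:
    """Group feedback entries into the three proposal categories."""
    proposals = {"graduate_rule": [], "optimize_skill": [], "new_skill": []}
    for e in entries:
        # 1. Rule graduation: repeated, not-yet-graduated feedback
        if not e.get("graduated") and e.get("occurrences", 0) >= 3:
            proposals["graduate_rule"].append(e["topic"])
        # 2. Skill optimization: persistently low scores (assumed 1-5 scale)
        scores = e.get("scores", {})
        if scores and sum(scores.values()) / len(scores) < 3.0:
            proposals["optimize_skill"].append(e.get("source_skill", "?"))
        # 3. New Skill: a frequent pattern no existing Skill covers
        if e.get("occurrences", 0) >= 5 and not e.get("source_skill"):
            proposals["new_skill"].append(e["topic"])
    return proposals
```

Grouping by category first, as here, makes the "unified list grouped by category" output a simple iteration over the returned dict.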

Step 4: Request user confirmation
    - Each proposal has exactly two outcomes: confirm / skip
    - Rule graduation: propose writing the rule into the target Skill or `CLAUDE.md`
    - Skill optimization: propose changes to the Skill's methodology, validation steps, or coverage
    - New Skill proposal: propose creating a new Skill package
    - Never execute automatically by default; never bypass the user
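The two-outcome gate can be sketched as a tiny helper. In practice the decision comes from the host agent's UI; the `decide` callback here stands in for that interaction and is a hypothetical name.

```python
def review_proposals(proposals, decide):
    """Split proposals into confirmed and skipped; there is no third outcome."""
    confirmed, skipped = [], []
    for proposal in proposals:
        (confirmed if decide(proposal) else skipped).append(proposal)
    return confirmed, skipped
```

Keeping the gate this narrow is the point: nothing is executed inside it, so the only way a change happens is an explicit confirmation followed by a separate apply step.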

Step 5: Apply confirmed changes
    - Rule graduation: write the formal rule into the target file and mark the corresponding feedback entry `graduated: true`
    - Skill optimization: update the Skill's execution steps, coverage checklist, cautions, or templates
    - New Skill: create a standalone Skill directory with a `SKILL.md`, examples, and any necessary scripts
    - Proposals the user skips may be marked `skipped: true` to avoid repeated prompting
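A sketch of the rule-graduation bookkeeping in this step: append the confirmed rule to the rules file, then flag the feedback entry so the same proposal is not raised again. The function name and append format are assumptions; the real skill writes into `CLAUDE.md` or a target Skill file per its templates.

```python
from pathlib import Path

def graduate(entry: dict, rule_text: str, rules_file: Path) -> dict:
    """Append a confirmed rule to the rules file and mark the entry graduated."""
    with rules_file.open("a", encoding="utf-8") as f:
        f.write(f"\n- {rule_text}\n")    # e.g. a bullet appended to CLAUDE.md
    entry["graduated"] = True            # so the runner stops re-proposing it
    return entry
```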

[Cautions]
- The core of the evolution mechanism is "proposal + user confirmation", not "automatic rule changes".
- The detection script should cover both Chinese and English feedback expressions rather than working for a single language only.
- The feedback directory is for experience learning and is not the same as memory; do not record user behavior corrections only in the memory system.
- When writing feedback, prefer too little over too much; with no clear signal, it is fine to return "no new feedback".
- Scores must reflect the actual execution just performed; never fabricate scores to pad the data.
- Before proposing a new Skill, confirm that no existing Skill covers the pattern, to avoid reinventing the wheel.
- If the project already has an evolution framework, reuse its existing feedback-observer, evolution-runner, hooks, and templates rather than building a parallel set.
