Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Improvement Generator

v1.0.0

Use when you need to generate improvement candidates for a target skill, inject failure information from the previous run into the next generation round, or analyze historical memory patterns to avoid repeating failures. Supports --trace for injecting failure context. Not for scoring (use improvement-discriminator) or evaluation (use improvement-learner).

by @lanyasheng

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for lanyasheng/auto-improvement-generator.

Prompt preview (Install & Setup):
Install the skill "Improvement Generator" (lanyasheng/auto-improvement-generator) from ClawHub.
Skill page: https://clawhub.ai/lanyasheng/auto-improvement-generator
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install auto-improvement-generator

ClawHub CLI


npx clawhub@latest install auto-improvement-generator
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description match the included code: the tool generates improvement candidates from a target skill, feedback, and failure traces. However, SKILL.md documents an evaluator-driven fix path that 'sends current SKILL.md + failures to `claude -p`' which implies use of an external LLM CLI or service. The skill declares no required binaries or credentials, so either the external LLM call is optional/undeclared or the manifest is incomplete.
Instruction Scope
Runtime instructions and the script read target skill files, state roots, and feedback/failure JSONs (expected). But SKILL.md explicitly describes sending SKILL.md and failures to an external LLM (Claude) for automated fixes; that behavior can transmit contextual files to an external endpoint and is not reflected in the skill's declared requirements. The instructions otherwise stay within the stated purpose (generate candidates and adjust based on trace).
Install Mechanism
No install spec is present and the skill is instruction-only with local Python scripts. There is no external download or archive extraction. This is low-risk from an installation perspective.
Credentials
The skill declares no required environment variables or credentials, yet SKILL.md implies invoking an external LLM CLI/service (Claude). Calling such a service typically requires either a CLI binary or API key(s). The absence of declared binaries/env vars is a mismatch and could hide a requirement for potentially sensitive credentials or an undeclared dependency.
Persistence & Privilege
Flags show always:false and no config paths or system-wide changes are requested. The skill does not request persistent system privileges or automatic always-on installation.
What to consider before installing
This skill appears to do what it says (generate candidate improvements), and the included Python implements that logic. However, SKILL.md states that when a baseline-failures source is present it will send SKILL.md plus the failures to "claude -p" to propose fixes, yet the skill manifest does not declare any binary or API key requirements. Before installing or running:

  1. Inspect the full scripts/propose.py (search for any subprocess/requests calls or literal 'claude' usage) to confirm whether it invokes an external CLI or network endpoint.
  2. If it does call an external LLM, verify where credentials would be provided and whether any sensitive files (SKILL.md, state, or feedback) would be transmitted; require explicit consent and a dedicated API key.
  3. Run the tool in a sandboxed environment or on non-sensitive test data first.
  4. Ask the author/maintainer to update the manifest to declare required binaries and env vars (e.g., CLAUDE_API_KEY or the required CLI) and to document exactly what data is sent externally.

If you cannot confirm the external-call behavior, treat the skill as potentially exfiltrating contextual files and avoid running it on private/production skill directories.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97denxx5rtt7htms0ya53v8qx849s5a
81 downloads
0 stars
1 version
Updated 3w ago
v1.0.0
MIT-0

Improvement Generator

Produces ranked improvement candidates from target analysis, feedback signals, and failure traces.

When to Use

  • Generate structured improvement candidates for a target skill
  • Inject the previous run's failure trace into the next round (GEPA trace-aware)
  • Use memory patterns to avoid strategies that have already failed 3 or more times

When NOT to Use

  • Scoring candidates → use improvement-discriminator
  • Evaluating skill structure → use improvement-learner
  • Running the full pipeline → use improvement-orchestrator

CLI

python3 scripts/propose.py [options]

  --target /path/to/skill        REQUIRED: skill directory or single file
  --state-root /path/to/state    default: lib/state_machine.DEFAULT_STATE_ROOT
  --source memory.json           repeatable: feedback/memory/baseline-failures sources
  --max-candidates 4             default 4: max candidates to generate
  --trace failure_trace.json     inject prior failure trace for retry prioritization
  --run-id custom-run-id         default: auto-generated from target
  --output candidates.json       default: {state-root}/candidate_versions/{run-id}.json
  --lane generic-skill           default: generic-skill
Param             Default  When to change
--max-candidates  4        Lower to 2 for fast iteration; raise for diverse exploration
--trace           None     Pass when retrying after a gate revert; deprioritizes the failed category
--source          []       Add feedback.jsonl, memory files, or evaluator baseline-failures.json
--run-id          auto     Set explicitly when integrating with external tracking
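
A typical retry flow with --trace can be scripted from orchestration code. A minimal sketch in Python (the paths, file names, and flag values are illustrative; the trace shape mirrors the failure_trace.json example in the Trace-Aware Generation section below):

import json
import subprocess
from pathlib import Path

# Record why the previous candidate was reverted.
trace = {"candidate_id": "cand-01-docs", "reason": "gate rejected"}
Path("failure_trace.json").write_text(json.dumps(trace), encoding="utf-8")

# Re-run the generator with the trace so the failed category is deprioritized.
subprocess.run(
    [
        "python3", "scripts/propose.py",
        "--target", "/path/to/skill",
        "--state-root", "./state",
        "--trace", "failure_trace.json",
        "--max-candidates", "4",
    ],
    check=True,
)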

6 Candidate Categories

Category   Risk    Executor Support               Description
docs       low     Yes (append_markdown_section)  Append operator notes/limitations to Markdown docs
reference  low     Yes (append_markdown_section)  Add control-plane-friendly notes to reference files
guardrail  low     Yes (append_markdown_section)  Add conservative auto-promote rules to guardrail docs
prompt     medium  No                             SKILL.md prompt restructure (requires manual review)
workflow   medium  No                             Workflow adapter/orchestration hook changes
tests      medium  No                             Smoke-check/validation test cases
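
The three executor-supported categories share the append_markdown_section action. A minimal sketch of how an executor could apply such a plan (the function, target path, and content are illustrative; the actual executor is not part of this skill, and the plan shape follows the Output Artifact example below):

from pathlib import Path

def apply_append_markdown_section(plan: dict, target_file: Path) -> None:
    # Append the plan's section heading plus its content lines to a Markdown doc.
    lines = [plan["section_heading"], *plan["content_lines"]]
    with target_file.open("a", encoding="utf-8") as fh:
        fh.write("\n" + "\n".join(lines) + "\n")

plan = {
    "action": "append_markdown_section",
    "section_heading": "## Operator Notes",
    "content_lines": ["- Review generated notes before promoting them."],
}
apply_append_markdown_section(plan, Path("/path/to/skill/SKILL.md"))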

Trace-Aware Generation

When --trace is provided, adjust_candidates_from_trace() deprioritizes the category that failed in the prior run and boosts alternatives:

failure_trace.json: {"candidate_id": "cand-01-docs", "reason": "gate rejected"}
→ docs candidates moved to end, reference/guardrail candidates boosted to front
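
A minimal sketch of that reordering logic (the id-suffix parsing is an assumption; the real adjust_candidates_from_trace() in scripts/propose.py may differ):

def adjust_candidates_from_trace(candidates: list[dict], trace: dict) -> list[dict]:
    # Assumes the failed category can be read off the candidate id suffix,
    # e.g. "cand-01-docs" -> "docs"; the real helper may carry it explicitly.
    failed_category = trace.get("candidate_id", "").rsplit("-", 1)[-1]
    boosted = ("reference", "guardrail")
    front = [c for c in candidates
             if c["category"] in boosted and c["category"] != failed_category]
    middle = [c for c in candidates
              if c["category"] not in boosted and c["category"] != failed_category]
    back = [c for c in candidates if c["category"] == failed_category]
    return front + middle + back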

Evaluator-Driven Fix (_find_evaluator_failures + _llm_propose_skill_fix)

When --source includes a baseline-failures.json (type=evaluator_baseline_failures), the generator:

  1. Reads failed task details (task_id, score, error)
  2. Sends the current SKILL.md plus the failures to claude -p to get a targeted fix (see the sketch after this list)
  3. Returns an eval-fix candidate as highest priority (risk_level=low, executor_support=True)
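
As the security scan notes, step 2 implies an external LLM call. A minimal sketch of what that step could look like (the prompt wording and subprocess call are illustrative; inspect scripts/propose.py to confirm the real behavior and exactly what data leaves the machine):

import json
import subprocess
from pathlib import Path

def llm_propose_skill_fix(skill_md: Path, failures: list[dict]) -> str:
    # Build a prompt from the current SKILL.md plus the failed task details.
    prompt = (
        "Propose a targeted fix to this SKILL.md so the failed tasks pass.\n\n"
        f"SKILL.md:\n{skill_md.read_text(encoding='utf-8')}\n\n"
        f"Failures:\n{json.dumps(failures, indent=2)}"
    )
    # `claude -p` runs the Claude CLI non-interactively; note that this sends
    # the prompt, including SKILL.md contents, to an external service.
    result = subprocess.run(["claude", "-p", prompt],
                            capture_output=True, text=True, check=True)
    return result.stdout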

Correction Hotspots (_find_correction_hotspots)

Scans feedback.jsonl sources for user correction events (outcome=correction|partial). Returns dimension_hint → count mapping used to prioritize candidates that address the most-corrected dimensions.
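
A minimal sketch of that scan, assuming feedback.jsonl holds one JSON event per line with outcome and dimension_hint fields (field names taken from the description above; the real parser may be more defensive):

import json
from collections import Counter
from pathlib import Path

def find_correction_hotspots(feedback_path: Path) -> dict[str, int]:
    # Count user-correction events per dimension_hint, most-corrected first.
    counts: Counter[str] = Counter()
    for line in feedback_path.read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("outcome") in ("correction", "partial"):
            counts[event.get("dimension_hint", "unknown")] += 1
    return dict(counts.most_common())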

<example>
Correct: first generation run, with evaluator baseline failures available
$ python3 scripts/propose.py --target /path/to/skill --source baseline-failures.json --state-root ./state
→ Candidate 1: LLM-proposed SKILL.md fix targeting failed tasks (category=prompt, risk=low)
→ Candidates 2-4: template candidates (docs, reference, guardrail)
→ stdout: /state/candidate_versions/run-001.json
</example>

<anti-example>
Wrong: keep retrying the same category after it has already failed 3 times
→ Instead, pass --trace with the failure information so the generator automatically switches to other categories
</anti-example>

Output Artifact

{"schema_version": "1.0", "run_id": "...", "stage": "proposed",
 "candidates": [{"id": "cand-01-docs", "category": "docs", "risk_level": "low",
   "execution_plan": {"action": "append_markdown_section", "section_heading": "## Operator Notes",
     "content_lines": ["..."]}, ...}],
 "failure_trace_used": false, "truth_anchor": "/state/candidate_versions/run-001.json"}

Related Skills

  • improvement-discriminator: Scores the candidates this skill produces → called by orchestrator as stage 2
  • improvement-orchestrator: Calls generator as stage 1, passes --source with failure traces
  • improvement-evaluator: Baseline failures fed back as --source to inform candidate generation
