Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Meta Skill Optimizer

v1.0.0

Self-improving AI skill optimizer that learns from feedback, auto-tunes prompts, optimizes tool usage patterns, and evolves based on success/failure analysis...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for jason-aka-chen/meta-skill-optimizer.

Prompt Preview: Install & Setup
Install the skill "Meta Skill Optimizer" (jason-aka-chen/meta-skill-optimizer) from ClawHub.
Skill page: https://clawhub.ai/jason-aka-chen/meta-skill-optimizer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install meta-skill-optimizer

ClawHub CLI


npx clawhub@latest install meta-skill-optimizer
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The SKILL.md and meta_optimizer.py are broadly consistent: both describe learning from successes and failures, prompt optimization, pattern storage, and approach recommendation. However, the SKILL.md suggests pip-installing 'numpy scipy json' while the registry lists no install spec (the registry and README disagree). The docs also reference API hooks (e.g., record_execution, optimize_skill) that do not appear as named methods in the shown code (the code provides record_success/record_failure, etc.), a mismatch between the claimed API and the implementation.
Instruction Scope
The README explicitly recommends 'Record Everything' and shows an after_execution hook to auto-record all executions and contexts. That means the skill is intended to capture arbitrary execution context and results across skills, which may include sensitive inputs and outputs. The instructions give broad discretion to collect and merge data ('Merge Insights', 'Export Knowledge'). The SKILL.md does not describe redaction, filtering, user consent, or any other privacy safeguard; this scope creep from 'optimizer' to broad data collection is a significant privacy concern.
Install Mechanism
No formal install spec was provided in the registry (instruction-only), but SKILL.md includes a 'pip install numpy scipy json' line. That is inconsistent with the registry metadata. Installing 'json' via pip is unnecessary (stdlib json exists); requesting SciPy may be disproportionate if it's not used. Since there's no install script in the registry, dependency installation would be manual and should be audited.
Credentials
The skill declares no required environment variables or credentials, which matches the code snapshots. The optimizer saves its knowledge base under ~/.meta_optimizer/<skill>.json by default — local storage is proportional for a learning agent, but it means potentially sensitive execution data will be written to the user's home directory. There is no visible network or telemetry in the provided snippet, but SKILL.md references 'export' and 'merge' features; those could imply network I/O if implemented elsewhere — this should be checked before use.
Persistence & Privilege
The skill sets always:false (the default), so it is not force-included, which is appropriate. However, the README encourages hooking into after_execution and calling 'optimize_skill(skill)', which would let the optimizer observe and modify other skills' behavior at runtime. That is powerful: it increases the blast radius (it can influence many skills) even though it is not always-on. The code writes persistent files to the home directory, which is expected for a knowledge base, but users should be aware that recorded executions are stored persistently.
What to consider before installing
This skill roughly does what it says, but review the following before installing:

  1. Auto-recording: The SKILL.md encourages auto-recording of all executions, which can capture private inputs and outputs. Decide whether you are comfortable with the optimizer storing that data on disk (default: ~/.meta_optimizer/<skill>.json) and whether redaction/consent is enforced.
  2. README/code mismatch: The README shows a 'record_execution' hook and an 'optimize_skill' call, but the included code exposes record_success/record_failure and other methods. Ask the author or inspect the full meta_optimizer.py to confirm the actual API and any missing functions.
  3. Dependencies: The README suggests 'pip install numpy scipy json' while the registry lists no install spec; 'json' is stdlib (the pip install is unnecessary) and SciPy may be unnecessary. Audit dependencies before running pip.
  4. Network/telemetry: Search the full source for any network, telemetry, or export code (functions named export/merge/send/post/requests/urllib/socket) before giving it access to real data.

If you must try it: run it in a sandboxed environment with non-sensitive data, review or modify the code to add redaction or require explicit user confirmation before recording or exporting, and periodically inspect and clear ~/.meta_optimizer. If you want, I can: (a) scan the remaining truncated portion of meta_optimizer.py for network calls or export functions, (b) list the exact lines where the README and code disagree, or (c) produce a minimal safe wrapper that forces redaction and disables automatic after_execution hooks.

Like a lobster shell, security has layers — review code before you run it.

latest: vk977pcq695mvd3gdk2n8y9gk5n83ceyf
120 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Meta Skill Optimizer

Self-improving AI capability that enables continuous skill enhancement.

Features

1. Feedback Learning

  • Success Analysis: Learn from successful executions
  • Failure Analysis: Understand and prevent failures
  • Pattern Recognition: Identify recurring patterns
  • Preference Learning: Adapt to user preferences

2. Prompt Optimization

  • Auto-Tuning: Optimize prompts based on outcomes
  • Chain-of-Thought: Improve reasoning chains
  • Example Selection: Dynamic few-shot example selection
  • Style Adaptation: Match user communication style

3. Tool Usage Optimization

  • Tool Selection: Choose best tools for tasks
  • Parameter Tuning: Optimize tool parameters
  • Workflow Patterns: Discover effective workflows
  • Error Recovery: Learn from tool errors

4. Self-Diagnosis

  • Capability Assessment: Know what it can/can't do
  • Knowledge Gaps: Identify missing knowledge
  • Confidence Calibration: Accurate confidence levels
  • Limitation Awareness: Know when to ask for help

5. Continuous Evolution

  • Version Tracking: Track skill improvements
  • A/B Testing: Compare approach effectiveness
  • Best Practices: Extract and codify learnings
  • Knowledge Base: Build searchable knowledge

Installation

pip install numpy scipy json

Usage

Initialize Optimizer

from meta_optimizer import SkillOptimizer

optimizer = SkillOptimizer(
    skill_name="data_analysis",
    learning_rate=0.1
)

Record Execution Result

# Record successful execution
optimizer.record_success(
    task="analyze sales data",
    approach="used pandas groupby",
    context={"data_size": "10MB", "complexity": "high"},
    outcome={"success": True, "quality": "high"}
)

# Record failure
optimizer.record_failure(
    task="predict stock price",
    approach="used linear regression",
    error="insufficient features",
    lesson="need more technical indicators"
)

Get Optimized Approach

# Get best approach for task
best_approach = optimizer.get_best_approach(
    task_type="data_analysis",
    context={"data_size": "1GB"}
)

print(best_approach)
# {'method': 'chunked_processing', 'tools': ['pandas', 'dask']}

Optimize Prompt

# Optimize prompt based on results
optimized_prompt = optimizer.optimize_prompt(
    original_prompt="Analyze this data",
    outcome="too vague",
    feedback="be more specific about analysis type"
)

print(optimized_prompt)
# "Analyze this time-series data using trend detection and seasonality analysis"

API Reference

Feedback Learning

  • record_success(...): Record successful execution
  • record_failure(...): Record failed execution
  • get_insights(): Get learned insights

Prompt Optimization

  • optimize_prompt(...): Optimize prompt based on feedback
  • generate_examples(...): Generate few-shot examples
  • adapt_style(...): Adapt to user style

Tool Optimization

  • suggest_tools(...): Suggest best tools
  • optimize_params(...): Optimize tool parameters
  • discover_workflow(...): Discover effective workflows

Self-Diagnosis

  • assess_capability(...): Assess capability for task
  • identify_gaps(): Identify knowledge gaps
  • calibrate_confidence(): Calibrate confidence levels

Evolution

  • track_improvement(): Track improvement over time
  • export_knowledge(): Export learned knowledge
  • merge_experiences(): Merge from other optimizers

How It Works

1. Feedback Loop

Task → Execution → Result → Feedback → Learning → Improvement

2. Pattern Discovery

Multiple Executions → Pattern Mining → Best Practices → Codification

3. Continuous Learning

New Task → Similar Past Tasks → Learned Lessons → Optimized Approach

Use Cases

  • Prompt Engineering: Continuously improve prompts
  • Tool Selection: Better tool recommendations
  • Error Prevention: Learn from past mistakes
  • User Adaptation: Match user preferences
  • Capability Growth: Expand what AI can do

Knowledge Base

The optimizer builds a knowledge base:

{
  "patterns": {
    "data_analysis": {
      "small_data": "pandas sufficient",
      "large_data": "use dask or chunking",
      "time_series": "check stationarity first"
    }
  },
  "prompts": {
    "effective": ["specific", "contextual", "actionable"],
    "ineffective": ["vague", "ambiguous", "overly broad"]
  },
  "tools": {
    "coding": ["cursor", "claude-code"],
    "research": ["tavily", "browser"]
  }
}

Integration

With OpenClaw

# Auto-record all executions
@hookimpl
def after_execution(result, context):
    optimizer.record_execution(context, result)

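Note that record_execution is not among the methods shown in the shipped code, which exposes record_success/record_failure instead (see the security scan above). A hedged adapter that bridges the hook to the documented methods; the context/result field names are assumptions based on the usage examples earlier in this README.

```python
def record_execution(optimizer, context: dict, result: dict) -> None:
    """Route a raw execution record to record_success/record_failure."""
    if result.get("success"):
        optimizer.record_success(
            task=context.get("task", ""),
            approach=context.get("approach", ""),
            context=context,
            outcome=result,
        )
    else:
        optimizer.record_failure(
            task=context.get("task", ""),
            approach=context.get("approach", ""),
            error=result.get("error", "unknown"),
            lesson=result.get("lesson", ""),
        )
```

Confirm the actual API against the full meta_optimizer.py before wiring this into a hook.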
With Skills

# Optimize skill behavior
skill = MySkill()
optimized_skill = optimizer.optimize_skill(skill)

Best Practices

  1. Record Everything: More data = better learning
  2. Categorize Failures: Understand failure types
  3. Update Regularly: Keep knowledge current
  4. Merge Insights: Combine learnings from multiple sources
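
Before following "Record Everything", consider the redaction safeguard the security scan recommends: strip likely-sensitive keys and oversized values from a context before it is persisted. A minimal sketch; the sensitive-key list and length cap are my assumptions.

```python
# Keys assumed sensitive for illustration; tune for your environment.
SENSITIVE_KEYS = {"password", "token", "api_key", "secret", "authorization"}

def redact(context: dict, max_len: int = 200) -> dict:
    """Return a copy safe to persist: secrets masked, long strings truncated."""
    clean = {}
    for key, value in context.items():
        if key.lower() in SENSITIVE_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, str) and len(value) > max_len:
            clean[key] = value[:max_len] + "...[truncated]"
        else:
            clean[key] = value
    return clean
```

Call this on every context dict before passing it to record_success or record_failure.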

Future Capabilities

  • Cross-skill learning
  • Automatic skill creation
  • Self-debugging
  • Automated testing
