Prompt Evolution Engine

v1.0.0

Iteratively improve AI prompts by analyzing, rewriting, comparing, and refining them using structured patterns for clarity, structure, and format compliance.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for charlie-morrison/prompt-evolution-engine.

Prompt Preview: Install & Setup
Install the skill "Prompt Evolution Engine" (charlie-morrison/prompt-evolution-engine) from ClawHub.
Skill page: https://clawhub.ai/charlie-morrison/prompt-evolution-engine
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install prompt-evolution-engine

ClawHub CLI


npx clawhub@latest install prompt-evolution-engine
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (prompt optimization) match the SKILL.md content: all actions concern analyzing and rewriting prompts, A/B testing, and model-specific tuning; no unrelated capabilities are requested.
Instruction Scope
Runtime instructions are limited to examining and rewriting prompts, suggesting patterns, and producing formatted output. The SKILL.md does not instruct the agent to read system files, environment variables, or transmit data to third-party endpoints.
Install Mechanism
No install spec or code files are present (instruction-only), so nothing will be downloaded or written to disk during installation.
Credentials
The skill declares no required environment variables, credentials, or config paths; this is proportionate to a prompt-editor utility.
Persistence & Privilege
`always` is false and there is no request to modify other skills or system-wide settings. The skill does allow autonomous invocation by default (the platform's normal behavior), but it does not request elevated or persistent privileges.
Assessment
This skill appears coherent and low-risk: it only rewrites and analyzes prompts and asks for no credentials or installs. Before using, avoid pasting secrets or proprietary data into prompts you ask the skill to optimize, and review any generated prompt (especially chain-of-thought or model-specific instructions) to ensure it doesn't expose internal processes or sensitive details. If you plan to run batch tests that involve real user data or external systems, treat those test inputs as sensitive and sanitize them beforehand.


v1.0.0
MIT-0

Prompt Optimizer

Iteratively improve AI prompts through structured evaluation, A/B testing, and feedback-driven refinement. Use when a prompt underperforms, produces inconsistent results, or needs optimization for a specific use case.

Usage

Optimize this prompt: [paste your prompt]

Or with context:

Optimize this prompt for [goal]. Current issues: [problems]. Target model: [model name].

How It Works

  1. Analyze — identify structural weaknesses (vague instructions, missing constraints, poor examples)
  2. Rewrite — apply proven prompt engineering patterns (chain-of-thought, few-shot, role-setting, output format)
  3. Compare — generate before/after evaluation with expected improvement areas
  4. Iterate — if user provides feedback on the rewritten prompt, refine further
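The four-step loop above can be sketched in miniature. This is an illustrative outline only: the `analyze` and `rewrite` helpers below use trivial string heuristics as stand-ins for the model-driven judgment the skill actually applies.

```python
def analyze(prompt):
    # Flag structural weaknesses with simple heuristic checks
    # (the real skill relies on model judgment, not string matching).
    issues = []
    if len(prompt.split()) < 8:
        issues.append("too short: likely missing constraints")
    if "format" not in prompt.lower():
        issues.append("no output format specified")
    return issues

def rewrite(prompt, issues):
    # Apply one pattern per detected issue (output anchoring shown here).
    if "no output format specified" in issues:
        prompt += "\n\nRespond in markdown with a ## Summary section."
    return prompt

def optimize(prompt, iterations=1):
    # Analyze -> rewrite; repeat if further passes are requested.
    for _ in range(iterations):
        issues = analyze(prompt)
        if not issues:
            break
        prompt = rewrite(prompt, issues)
    return prompt

improved = optimize("Summarize this article.")
```

In the real flow, the Compare step would evaluate `improved` against the original before a further Iterate pass driven by user feedback.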

Optimization Patterns Applied

  • Clarity: Replace ambiguous language with specific, measurable instructions
  • Structure: Add section headers, numbered steps, output format templates
  • Constraints: Add boundaries (length, tone, forbidden patterns, edge cases)
  • Examples: Generate few-shot examples if missing
  • Chain-of-thought: Add reasoning steps for complex tasks
  • Role/persona: Set context-appropriate expertise framing
  • Output anchoring: Specify exact output format (JSON, markdown, etc.)
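As a concrete illustration of the output-anchoring pattern, an explicit schema can be appended to an otherwise vague prompt. The schema below is invented for the example, not mandated by the skill:

```python
import json

base_prompt = "Extract the key facts from the text."

# Anchor the output by spelling out the exact JSON shape expected.
schema = {"facts": ["string"], "confidence": "high | medium | low"}
anchored = (
    base_prompt
    + "\n\nReturn ONLY valid JSON matching this shape:\n"
    + json.dumps(schema, indent=2)
)
```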

Parameters

| Parameter  | Description                    | Default               |
|------------|--------------------------------|-----------------------|
| goal       | What the prompt should achieve | Inferred from content |
| model      | Target LLM (affects strategy)  | General-purpose       |
| max_tokens | Target output length           | No limit              |
| style      | concise / detailed / creative  | detailed              |
| iterations | How many refinement passes     | 1                     |
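Assuming the parameters are passed inline in the request text (the phrasing below is illustrative, not a required syntax), a filled-in invocation might be assembled like this:

```python
params = {
    "goal": "produce consistent JSON summaries",
    "model": "Claude Sonnet 4.6",
    "style": "concise",
    "iterations": 2,
}

# Build the optimization request from the chosen parameters.
request = (
    f"Optimize this prompt for {params['goal']}. "
    f"Target model: {params['model']}. "
    f"Style: {params['style']}. Iterations: {params['iterations']}.\n\n"
    "[paste your prompt]"
)
```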

Output Format

## Analysis
[Weaknesses identified in original prompt]

## Optimized Prompt
[The improved prompt, ready to copy-paste]

## Changes Made
[Bullet list of specific improvements and why]

## Expected Impact
[What should improve: consistency, accuracy, relevance, format compliance]

Advanced Usage

Batch Optimization

Optimize these 3 prompts for the same task, pick the best approach:
1. [prompt A]
2. [prompt B]  
3. [prompt C]

A/B Test Design

Create an A/B test for this prompt. Generate variant A (structured) and variant B (conversational). Include 5 test inputs to compare.

Model-Specific Tuning

Optimize this prompt specifically for Claude Sonnet 4.6. Use extended thinking triggers and XML tags.
