Prompts

v1.0.0

Deep prompt engineering workflow—task spec, constraints, examples, evaluation sets, iteration protocol, regression testing, and safety alignment. Use when im...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for clawkk/prompts.

Prompt Preview: Install & Setup
Install the skill "Prompts" (clawkk/prompts) from ClawHub.
Skill page: https://clawhub.ai/clawkk/prompts
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install clawkk/prompts

ClawHub CLI


npx clawhub@latest install prompts
Security Scan
VirusTotal
Benign
OpenClaw
Benign (high confidence)
Purpose & Capability
Name and description match the content of SKILL.md: a six-stage prompt engineering workflow. The skill requests nothing (no env vars, binaries, or config paths) that would be unexpected for this purpose.
Instruction Scope
SKILL.md stays on topic (task spec, constraints, examples, eval sets, iteration, shipping). It does not instruct the agent to read unrelated files, access credentials, or transmit data to external endpoints. Suggestions to log prompt version IDs and to align with an llm-evaluation skill are reasonable operational notes, not secret access.
Install Mechanism
No install spec (instruction-only). Nothing is written to disk or downloaded, which minimizes risk and is appropriate for a guidance-style skill.
Credentials
The skill declares no required environment variables, credentials, or config paths. There are no disproportionate secret or config requests relative to the stated purpose.
Persistence & Privilege
always is false and model invocation is allowed (platform default). The skill does not request persistent system-wide privileges or to modify other skills' configs.
Assessment
This is a safe, instruction-only prompt-engineering workflow and appears coherent with its description. Before using in production: (1) review any downstream tooling or CI integrations you’ll connect to (regression suites, logging) because those may require credentials or network access outside this skill; (2) avoid pasting secrets into prompts or eval sets; (3) if you link this guidance to other skills (e.g., an llm-evaluation harness), inspect those skills for their required permissions; (4) consider adding concrete CI/runbook steps if you plan automated regression testing so you know what systems will be touched. Otherwise it’s appropriate to use as a prompt-engineering checklist.

Like a lobster shell, security has layers — review code before you run it.

latest: vk979fjsdrazvdts99bxp3qgxd983pp32
143 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Prompt Engineering (Deep Workflow)

Prompts behave like natural-language programs: they need specs, tests, and version control—especially in production.

When to Offer This Workflow

Trigger conditions:

  • Prompt or system message change; quality regressions
  • Structured outputs (JSON), tool use, or RAG grounding requirements
  • Safety or policy alignment needs

Initial offer:

Use six stages: (1) define task & success, (2) constraints & format, (3) few-shot & style, (4) build eval set, (5) iterate with discipline, (6) ship, monitor, regress. Confirm the model family and latency budget first.


Stage 1: Define Task & Success

Goal: Clear user-visible outcome and failure modes (hallucination, omission, tone).

Exit condition: Success rubric in plain language; out-of-scope cases listed.


Stage 2: Constraints & Format

Goal: Must/must-not rules; output schema (JSON Schema, bullet structure); length limits.

Practices

  • Separate system (policy, role) from user (task instance)
  • Ask model to cite sources when grounding matters

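The system/user separation and output-schema rules above can be sketched as follows. This is a minimal illustration, not the skill's implementation: the message-dict shape follows the common chat-API convention, and the classifier policy, field names, and helper functions are all hypothetical.

```python
import json

# Hypothetical policy: lives in the system message, separate from the task.
SYSTEM = (
    "You are a support-ticket classifier. "
    'Respond with JSON only: {"category": <str>, "urgency": "low"|"high"}. '
    "Cite the ticket sentence that justifies the category."
)

def build_messages(ticket_text: str) -> list[dict]:
    """Keep policy (system) separate from the task instance (user)."""
    return [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": ticket_text},
    ]

def parse_reply(raw: str) -> dict:
    """Enforce the must/must-not rules on the model's raw output."""
    reply = json.loads(raw)                       # must be valid JSON
    assert set(reply) == {"category", "urgency"}  # no extra or missing keys
    assert reply["urgency"] in {"low", "high"}    # closed vocabulary
    return reply
```

Validating the reply against a closed schema at parse time turns vague format wishes into hard, testable constraints.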
Stage 3: Few-Shot & Style

Goal: Use examples only when they reduce ambiguity—avoid huge prompt bloat.

Practices

  • Diverse examples; avoid overlong negative examples that confuse the model

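A compact few-shot block, per the practice above, might look like this. The intent labels and example texts are invented for illustration; the point is that each example is short, and together they cover distinct cases rather than repeating one pattern.

```python
# Hypothetical few-shot examples: short and diverse, chosen to resolve
# ambiguity about the output format, not to be exhaustive.
EXAMPLES = [
    ("Refund for order #123?", '{"intent": "refund"}'),
    ("App crashes on login",   '{"intent": "bug_report"}'),
    ("Love the new update!",   '{"intent": "praise"}'),
]

def few_shot_messages(task: str) -> list[dict]:
    """Interleave user/assistant example turns, then append the real task."""
    msgs = []
    for user, assistant in EXAMPLES:
        msgs.append({"role": "user", "content": user})
        msgs.append({"role": "assistant", "content": assistant})
    msgs.append({"role": "user", "content": task})
    return msgs
```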
Stage 4: Build Eval Set

Goal: Frozen inputs with expected properties (not always exact text match).

Practices

  • Adversarial and multilingual slices if relevant
  • Regression suite in CI for critical prompts

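Checking expected properties rather than exact text can be sketched like this. The eval-case fields (`must_contain`, `schema_keys`) and the checker function are assumptions for illustration, not part of the skill itself.

```python
import json

# Hypothetical frozen eval case: pins the input plus expected *properties*
# of the output, not an exact string match.
EVAL_SET = [
    {
        "input": "Cancel my subscription",
        "must_contain": "cancel",
        "schema_keys": {"intent", "confidence"},
    },
]

def check_case(case: dict, model_output: str) -> list[str]:
    """Return a list of property violations; an empty list means pass."""
    failures = []
    try:
        parsed = json.loads(model_output)
    except json.JSONDecodeError:
        return ["output is not valid JSON"]
    if set(parsed) != case["schema_keys"]:
        failures.append("schema keys mismatch")
    if case["must_contain"] not in model_output.lower():
        failures.append(f"missing token: {case['must_contain']}")
    return failures
```

A runner that iterates `check_case` over the frozen set is what you would wire into CI as the regression suite.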
Stage 5: Iterate With Discipline

Goal: Change one major variable at a time when debugging quality.

Practices

  • Compare with the same temperature settings when A/B testing wording
  • Log the prompt version ID alongside outputs in production

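One way to tag outputs with a prompt version, assuming no registry exists yet, is to derive a stable ID from the prompt text itself. The function names and record shape below are hypothetical; in production the record would go to your logging pipeline.

```python
import hashlib
import time

def prompt_version(prompt: str) -> str:
    """Derive a stable, content-addressed ID from the prompt text."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:12]

def log_completion(prompt: str, output: str) -> dict:
    """Attach the prompt version to every logged output for traceability."""
    return {
        "prompt_version": prompt_version(prompt),
        "output": output,
        "ts": time.time(),
    }
```

Content-addressed IDs mean any edit to the prompt, however small, produces a new version automatically, so logs can never silently mix two wordings.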
Stage 6: Ship, Monitor, Regress

Goal: Canary prompt changes; watch implicit signals (thumbs, edits, task completion).
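
Canarying a prompt change can be as simple as deterministic bucketing, sketched below under the assumption that requests carry a stable user ID; the function and percentage are illustrative, not prescribed by the skill.

```python
import hashlib

def use_new_prompt(user_id: str, canary_pct: int = 5) -> bool:
    """Route a small, deterministic slice of users to the new prompt.

    Hashing the user ID (rather than sampling randomly) keeps each user's
    assignment stable across requests, which makes results reproducible.
    """
    bucket = int(hashlib.md5(user_id.encode("utf-8")).hexdigest(), 16) % 100
    return bucket < canary_pct
```

If the canary slice's implicit signals (thumbs, edits, task completion) degrade, roll back by setting the percentage to zero.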


Final Review Checklist

  • Task and rubric defined
  • Constraints and output format explicit
  • Eval set versioned; regression path exists
  • Iteration log disciplined; prompt versions tracked
  • Production monitoring and rollback plan

Tips for Effective Guidance

  • Clarity beats cleverness—short explicit instructions often win.
  • Chain-of-thought: use it when reasoning helps; hide the chain from end users if needed.
  • Align with llm-evaluation skill for larger harness design.

Handling Deviations

  • Chat vs batch: batch can use stricter structure and lower temperature.
  • Multimodal: specify how image details may be used or ignored.
