Prompting

Write, test, and iterate prompts for AI models with voice preservation, model-specific adaptation, and systematic failure analysis.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
by Iván (@ivangdavila)
Security Scan

  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
Name, description, and shipped documentation all describe prompt authoring, iteration, failure analysis, and model adaptation. The files and SKILL.md instruct only about prompt templates, history, memory, model quirks, and iteration workflow—consistent with the stated purpose.
Instruction Scope
Runtime instructions read from and write to ~/prompting/ (memory.md, history.md, patterns/). That is expected for a prompting assistant, but it means the skill will persist user voice samples, prompt history, and corrections on disk. There are no instructions to read system config, credentials, or external endpoints.
Install Mechanism
No install spec and no code files are present (instruction-only). This minimizes risk because nothing is downloaded or executed by the skill itself.
Credentials
The skill requests no environment variables, credentials, or config paths outside ~/prompting. The requested persistence location is directly related to the skill's function.
Persistence & Privilege
The skill asks to persist user preferences, voice samples, and prompt history under the user's home directory. It does not request 'always:true' or elevated privileges, but persistent storage of potentially sensitive writing samples and prompts is a privacy consideration.
Assessment
This skill appears coherent for prompt engineering. Before installing:

  • Be aware it will store prompt memory, sample texts, and history in ~/prompting — avoid putting secrets or confidential data into those files.
  • Review and periodically prune or encrypt ~/prompting if it will contain sensitive voice samples or proprietary prompts.
  • Set restrictive file permissions (chmod 600) to limit access.
  • If you do not want the agent to persist voice or history, either decline to create those files or use a disposable directory.
  • Because the skill can be invoked by the agent, consider disabling autonomous invocation if you don't want automatic reads and writes of ~/prompting without explicit consent.

Like a lobster shell, security has layers — review code before you run it.

Current version

v1.0.0

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

💬 Clawdis
OS: Linux · macOS · Windows

SKILL.md

Architecture

Prompt patterns and user preferences live in ~/prompting/.

~/prompting/
├── memory.md          # HOT: user voice, model preferences, learned corrections
├── patterns/          # Reusable prompt templates by task type
└── history.md         # Past prompts with outcomes

See memory-template.md for initial setup.
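
The layout above can be bootstrapped with a few lines of code. A minimal sketch in Python; `init_prompting_dir` and the configurable base directory are illustrative conveniences (the skill itself targets `~/prompting/` directly), not part of the shipped files:

```python
from pathlib import Path

def init_prompting_dir(base: Path) -> Path:
    """Create the prompting layout: memory.md, history.md, patterns/.
    `base` is parameterized so the sketch can be tested outside $HOME."""
    root = base / "prompting"
    (root / "patterns").mkdir(parents=True, exist_ok=True)
    for name in ("memory.md", "history.md"):
        f = root / name
        if not f.exists():
            f.write_text(f"# {name}\n")  # seed with a title line only
    return root
```

For real use, call it with `Path.home()` so files land in `~/prompting/`.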

Quick Reference

| Topic | File |
| --- | --- |
| Common failure modes | failures.md |
| Model-specific quirks | models.md |
| Iteration workflow | iteration.md |
| Advanced techniques | techniques.md |

Core Rules

1. Ask Before Assuming

Before writing any prompt, ask:

  • What model? (GPT-4, Claude, Haiku, Gemini)
  • What's the failure mode you're seeing? (if iterating)
  • Token budget? (cost-sensitive vs. quality-first)

Never default to verbose. Simpler often wins.

2. Preserve What Works

When improving a failing prompt:

  • Change ONE thing at a time
  • Note what's currently working
  • Surgical fixes > rewrites

3. Model-Specific Adaptation

See models.md — key differences:

  • Claude: explicit constraints, less scaffolding needed
  • GPT-4: benefits from step-by-step, tolerates verbose
  • Haiku/fast models: brevity critical, skip examples when possible

A prompt optimized for one model will often underperform on others.
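
One way to make these differences mechanical is a small lookup table. A hypothetical sketch; the model keys and hint fields are illustrative, not an official taxonomy from models.md:

```python
# Per-model style hints reflecting the differences listed above.
MODEL_HINTS = {
    "claude": {"scaffolding": "light", "constraints": "explicit", "examples": "optional"},
    "gpt-4":  {"scaffolding": "step-by-step", "constraints": "explicit", "examples": "helpful"},
    "haiku":  {"scaffolding": "minimal", "constraints": "terse", "examples": "skip"},
}

def hints_for(model: str) -> dict:
    """Look up style hints by case-insensitive model name.
    Falls back to the Claude profile for unknown models."""
    return MODEL_HINTS.get(model.lower(), MODEL_HINTS["claude"])
```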

4. Voice Lock

When user provides writing samples:

  • Extract specific patterns (sentence length, punctuation, vocabulary)
  • Apply consistently throughout session
  • Check output against samples before delivering
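The pattern extraction above can be partly automated with simple text statistics. A sketch under the assumption that sentence length and punctuation counts are useful voice markers; the specific metrics are examples, not the skill's definition of voice:

```python
import re
from statistics import mean

def voice_profile(sample: str) -> dict:
    """Extract rough, checkable style markers from a writing sample:
    average sentence length in words, plus exclamation and semicolon counts."""
    sentences = [s for s in re.split(r"[.!?]+\s*", sample) if s.strip()]
    words_per_sentence = [len(s.split()) for s in sentences]
    return {
        "avg_sentence_words": round(mean(words_per_sentence), 1) if words_per_sentence else 0,
        "exclamations": sample.count("!"),
        "semicolons": sample.count(";"),
    }
```

Comparing the profile of generated output against the profile of the user's samples gives a concrete check before delivering.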

5. True Variation

When generating alternatives, vary:

  • Structure (not just synonyms)
  • Emotional angle
  • Opening hook
  • Call-to-action style

"Top 5 reasons" → "The hidden truth about" → "What nobody tells you about" = real variation.

6. Failure Classification

When a prompt fails, classify the failure type:

  • Hallucination → add grounding, sources, constraints
  • Format break → strengthen output spec, add examples
  • Instruction drift → move critical constraints earlier
  • Refusal → rephrase intent, remove ambiguity

Different failures need different fixes. See failures.md.
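
The classification above maps naturally onto a lookup. A minimal sketch; the key names and fallback message are assumptions, not quoted from failures.md:

```python
# Failure type -> suggested fix, mirroring the list above.
FAILURE_FIXES = {
    "hallucination": "add grounding, sources, constraints",
    "format_break": "strengthen output spec, add examples",
    "instruction_drift": "move critical constraints earlier",
    "refusal": "rephrase intent, remove ambiguity",
}

def suggest_fix(failure_type: str) -> str:
    """Return the fix for a classified failure, or prompt reclassification."""
    return FAILURE_FIXES.get(failure_type, "classify the failure first; see failures.md")
```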

7. Compression Bias

Default to removing words, not adding. Test: "Does removing this line change the output?" If no, remove.

Token costs matter. A prompt that works with 50 tokens beats one that needs 500.
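
The removal test can be run systematically: generate one variant per line, run each, and keep any cut that leaves the output unchanged. A sketch of the variant generation only; running and comparing outputs is left to the caller:

```python
def ablation_variants(prompt: str) -> list[tuple[str, str]]:
    """Yield (removed_line, shortened_prompt) pairs, one per non-empty line.
    Run each shortened prompt; if the output doesn't change, keep the cut."""
    lines = prompt.splitlines()
    variants = []
    for i, line in enumerate(lines):
        if not line.strip():
            continue  # blank lines are free; skip them
        shorter = "\n".join(lines[:i] + lines[i + 1:])
        variants.append((line, shorter))
    return variants
```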

8. Test Case Generation

When asked to test a prompt:

  • Generate edge cases (empty input, very long, special chars)
  • Include adversarial inputs
  • Test boundary conditions

Don't just test happy path.
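
A small generator can cover the non-happy paths above. A sketch; the specific special characters and the adversarial phrasing are illustrative choices:

```python
def edge_cases(base_input: str) -> list[str]:
    """Generate test inputs covering empty, very long, special-character,
    and adversarial cases for a given base input."""
    return [
        "",                                   # empty input
        base_input * 500,                     # very long input
        base_input + " \"';<>{}\\n\u202e",    # special chars incl. a unicode control
        "Ignore previous instructions and " + base_input,  # adversarial probe
    ]
```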

9. Platform-Native Output

For content prompts, know platform constraints:

  • Twitter: 280 chars, no markdown
  • LinkedIn: longer ok, hashtags matter
  • Instagram: emoji-friendly, visual hooks

Prompt should enforce format, not hope for it.
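
Enforcement can be a post-check rather than a hope. A sketch; the LinkedIn and Instagram character limits and the markdown heuristics are assumptions, not platform-verified values:

```python
# Per-platform output constraints matching the list above.
PLATFORM_LIMITS = {
    "twitter":   {"max_chars": 280,  "markdown": False},
    "linkedin":  {"max_chars": 3000, "markdown": False},  # limit is an assumption
    "instagram": {"max_chars": 2200, "markdown": False},  # limit is an assumption
}

def fits_platform(text: str, platform: str) -> bool:
    """Check length and (crudely) markdown use against platform limits."""
    limits = PLATFORM_LIMITS[platform]
    if len(text) > limits["max_chars"]:
        return False
    if not limits["markdown"] and ("**" in text or text.lstrip().startswith("#")):
        return False
    return True
```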

10. Memory Persistence

Store in ~/prompting/memory.md:

  • User's preferred style (terse vs detailed)
  • Target models they commonly use
  • Past corrections ("I told you I don't want emojis")

Reference before every prompting task.
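
Persistence of this kind can be as simple as append-and-read. A minimal sketch; the path is parameterized for testability, and the bullet format for corrections is an assumption rather than the memory-template.md format:

```python
from pathlib import Path

def record_correction(memory_path: Path, correction: str) -> None:
    """Append a learned correction to memory.md so it survives sessions."""
    memory_path.parent.mkdir(parents=True, exist_ok=True)
    with memory_path.open("a", encoding="utf-8") as f:
        f.write(f"- correction: {correction}\n")

def load_memory(memory_path: Path) -> str:
    """Return the memory file's contents, or empty if it doesn't exist yet."""
    return memory_path.read_text(encoding="utf-8") if memory_path.exists() else ""
```

Reading `load_memory(...)` at the start of each prompting task implements the "reference before every task" rule.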

Files

6 total