Install
`openclaw skills install prompt-library-gardener`

Clean, tag, and index a user-provided prompt collection so the right prompt can be found, reused, and improved quickly.

Use this skill when a user has many saved prompts but cannot quickly reuse the right one. The outcome is a cleaned prompt library with consistent names, practical tags, duplicate decisions, and short reuse notes.
This is a prompt-only organization workflow. Work only from prompt text and context the user provides in the conversation. Do not search local folders, files, note apps, browser history, cloud drives, email, or private workspaces for prompts. If the user wants those prompts included, ask them to paste, upload, or summarize the relevant prompt list.
Use this skill when the user says or implies:
Do not use this skill for prompt engineering from scratch unless the immediate goal is to organize a library. For single-prompt critique or rewriting, use a prompt improvement workflow instead.
This skill will:
This skill will not:
Ask for these inputs if they are not already available:
If the prompt list is large, ask the user to provide it in batches and label each batch. Keep a running index only from batches already provided.
Create a quick inventory before editing anything.
Inventory fields:
Intake template:
| ID | Current title | One-line purpose | Output type | Initial status | Notes |
|---|---|---|---|---|---|
| P001 | [title] | [purpose] | [output] | [status] | [notes] |
If prompts arrive without titles, assign neutral temporary IDs first. Do not rename until grouping is complete.
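The intake steps above can be sketched as a small script. This is a minimal illustration, not part of the skill itself: the record fields mirror the template table, and `build_inventory` is a hypothetical helper that assigns neutral temporary IDs (P001, P002, ...) without renaming anything.

```python
from dataclasses import dataclass

@dataclass
class PromptRecord:
    """One row of the intake inventory. Field names mirror the template above."""
    pid: str
    title: str
    purpose: str = ""
    output_type: str = ""
    status: str = "unreviewed"
    notes: str = ""

def build_inventory(raw_prompts):
    """Assign neutral temporary IDs to untitled prompts and return
    intake records. No renaming happens at this stage."""
    records = []
    for i, p in enumerate(raw_prompts, start=1):
        pid = f"P{i:03d}"
        title = p.get("title") or pid  # untitled prompts keep the neutral ID
        records.append(PromptRecord(pid=pid, title=title,
                                    purpose=p.get("purpose", ""),
                                    output_type=p.get("output", "")))
    return records

def to_markdown(records):
    """Render the intake template as a Markdown table."""
    lines = ["| ID | Current title | One-line purpose | Output type | Initial status | Notes |",
             "|---|---|---|---|---|---|"]
    for r in records:
        lines.append(f"| {r.pid} | {r.title} | {r.purpose} | {r.output_type} | {r.status} | {r.notes} |")
    return "\n".join(lines)
```

A batch of pasted prompts becomes a reviewable table in one pass, and untitled entries are easy to spot because their title equals their ID.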
Group prompts by the job they help the user complete, not by vague labels like "misc" or "AI". A job group should answer the question: "What is the user trying to get done?"
Useful group patterns:
For each group, produce:
Group summary template:
GROUP: [Job group]
Purpose: [What this group helps with]
Prompts: [IDs]
Default pick: [ID and why]
Variants worth keeping: [IDs and when]
Merge candidates: [IDs]
Open questions: [Anything unclear]
Classify overlap carefully. Similar prompts are not always duplicates.
Duplicate decision levels:
Duplicate review template:
| Candidate IDs | Decision | Keep | Archive or merge | Reason |
|---|---|---|---|---|
| P003, P014 | Near duplicate | P014 | Merge P003 into notes | P014 has clearer output constraints |
When merging, keep the strongest phrasing and list useful details from the other version under reuse notes. Do not silently erase a distinctive constraint.
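Overlap candidates can be surfaced mechanically before the human review step. A minimal sketch using Python's `difflib`; the thresholds here are illustrative starting points, and the final keep/merge/archive decision stays with the reviewer, per the table above.

```python
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a, b):
    """Rough textual similarity between two prompt bodies (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_overlaps(prompts, near=0.85, related=0.6):
    """Surface candidate pairs for the duplicate review table.
    `prompts` maps prompt ID to prompt text. Thresholds are
    illustrative, not fixed rules."""
    flags = []
    for (id_a, text_a), (id_b, text_b) in combinations(prompts.items(), 2):
        score = similarity(text_a, text_b)
        if score >= near:
            flags.append((id_a, id_b, "near duplicate", round(score, 2)))
        elif score >= related:
            flags.append((id_a, id_b, "related variant", round(score, 2)))
    return flags
```

The output is only a candidate list: a pair flagged as "near duplicate" may still be two distinct prompts whose wording happens to overlap, which is why the decision column belongs to a person.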
Use tags that help retrieval. Prefer 3 to 7 tags per prompt.
Recommended tag families:
Tag quality rules:
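The 3-to-7 guideline above can be checked mechanically. A minimal sketch; the normalization choices (lowercase, hyphenated) are one reasonable convention, not a requirement of the skill.

```python
def check_tags(tags, min_tags=3, max_tags=7):
    """Validate a prompt's tag list against the 3-to-7 guideline.
    Returns a list of problems; an empty list means the tags pass."""
    problems = []
    if len(tags) < min_tags:
        problems.append(f"only {len(tags)} tags; aim for at least {min_tags}")
    if len(tags) > max_tags:
        problems.append(f"{len(tags)} tags; trim to at most {max_tags}")
    # Normalize so "Email" and "email " do not count as two tags.
    normalized = [t.strip().lower().replace(" ", "-") for t in tags]
    if len(set(normalized)) != len(normalized):
        problems.append("duplicate tags after normalization")
    return problems
```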
A good prompt name should show the job, output, and distinguishing constraint.
Naming formula:
[Job] - [Output] - [Special use or audience]
Examples:
Naming rules:
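The naming formula above is simple enough to automate for a first pass. A minimal sketch; the function name is illustrative, and generated names should still be reviewed against the rules.

```python
def prompt_name(job, output, special=None):
    """Compose a name using the Job - Output - Special formula.
    The special segment is optional; drop it rather than pad with filler."""
    parts = [job.strip(), output.strip()]
    if special and special.strip():
        parts.append(special.strip())
    return " - ".join(parts)
```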
Every kept prompt should include a short note that helps future selection.
Reuse note fields:
Reuse note template:
Use when: [scenario]
Do not use when: [scenario]
Inputs needed: [inputs]
Expected output: [format and quality]
Known tweaks: [short list]
Last tested: [date or not provided]
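The reuse note template above can be rendered from structured fields, which keeps every note in the same shape. A minimal sketch; note that a missing test date is recorded as "not provided" rather than guessed.

```python
def reuse_note(use_when, avoid_when, inputs, expected, tweaks=(), last_tested=None):
    """Render the reuse note template from its fields."""
    tweak_text = "; ".join(tweaks) if tweaks else "none recorded"
    return "\n".join([
        f"Use when: {use_when}",
        f"Do not use when: {avoid_when}",
        f"Inputs needed: {inputs}",
        f"Expected output: {expected}",
        f"Known tweaks: {tweak_text}",
        f"Last tested: {last_tested or 'not provided'}",
    ])
```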
Create a clean index that the user can copy into their storage system.
Minimum index columns:
| Name | Job | Tags | Best use | Inputs needed | Status |
|---|---|---|---|---|---|
| [name] | [job] | [tags] | [best use] | [inputs] | [status] |
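The final index can be emitted programmatically so the user gets a clean copy-paste table. A minimal sketch; the dictionary keys are illustrative and the column order matches the minimum index template above.

```python
def render_index(rows):
    """Emit the minimum index table as Markdown for copy-paste
    into the user's storage system."""
    header = ["Name", "Job", "Tags", "Best use", "Inputs needed", "Status"]
    lines = ["| " + " | ".join(header) + " |",
             "|" + "---|" * len(header)]
    for r in rows:
        cells = [r["name"], r["job"], ", ".join(r["tags"]),
                 r["best_use"], r["inputs"], r["status"]]
        lines.append("| " + " | ".join(cells) + " |")
    return "\n".join(lines)
```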
Optional index sections:
Deliver the result in this order:
Use this quick routine to keep the library useful:
If the user shares only titles, ask for the full prompt text. If they cannot provide it, build a provisional catalog from titles only and mark every classification as provisional.
Do not scan folders or local files. Say: "I can organize the prompts you provide here. Please paste or upload the prompt list you want included, and I will build the library from that."
Offer a minimal metadata pass where the user replaces sensitive details with placeholders. Do not ask for secrets, credentials, private customer data, or confidential internal material.
Recommend starting with simple job and output tags. Complex taxonomies often make retrieval slower. Add more dimensions only if the user can explain how they will search.
Keep model-specific variants only when behavior truly differs. Otherwise use one general prompt with a note like "Works best with models that follow structured instructions."
Copy and paste one of these into your AI assistant with your details filled in:
Clean up a messy prompt folder: "I have about 30 prompts in a notes file — some from ChatGPT, some from Claude, some I wrote myself. Many are duplicates with slight wording differences. I use them for content writing, code review, meeting summaries, and email drafting. Help me clean this up: tag each prompt by job, merge real duplicates, mark stale ones, and give me a searchable index."
Tag and index for reuse: "I keep rewriting the same kinds of prompts from memory. Here are 15 prompts I've saved for data analysis tasks. Some are for Excel formulas, some for SQL queries, some for chart design. Can you create clear names, practical tags, and a short reuse note for each so I can find the right one fast?"
Organize by retrieval style: "I have prompts scattered across three documents for different audiences — technical reports for my team, executive summaries for leadership, and training materials for new hires. Help me merge these into one library organized by output type, with tags that tell me which audience each prompt serves."
A strong result makes the prompt library easier to use the same day. The user should know which prompt to pick, why it is named that way, what tags to search, and what to do with duplicates.