Install
`openclaw skills install ai-tool-research`

Researches how people are using an AI tool (Claude Desktop, Cursor, OpenAI Codex, Google Gemini, or OpenClaw) and generates a Productivity Playbook plus a Skills Catalog in a consistent, rated, month-over-month format. Use when the user asks for a monthly research update on one of these tools, a productivity playbook, a skills catalog, a "what's new this month" report on an AI coding/agent tool, or wants to regenerate any of the files named `<Tool>-Productivity-Playbook.md` or `<Tool>-Skills-Catalog.md`. Also use if the user wants to run the same research cycle across all five tools.
Generates two Markdown artifacts for a given AI tool, updated for the last N days:
- `<Tool>-Productivity-Playbook.md` — how real people are using it (personas, use cases, unusual examples, links).
- `<Tool>-Skills-Catalog.md` — rated list of skills / extensions / plugins / rules / MCP servers with persona mapping.

The skill is runtime-agnostic. It works in Cursor, Claude Desktop, ChatGPT (web), Codex, Gemini CLI, and OpenClaw. See Usage across runtimes.
Trigger on requests like:
- "Regenerate `<Tool>-Skills-Catalog.md` with fresh ratings"

| Tool key | What it is | Primary sources |
|---|---|---|
| claude | Claude Desktop + Claude Code + Anthropic Skills | anthropic.com, anthropics/skills, obra/superpowers, ComposioHQ |
| cursor | Cursor AI IDE + Rules / Skills / Plugins | cursor.com, cursor.directory, awesome-cursorrules |
| codex | OpenAI Codex (CLI + IDE + App + Cloud) | openai.com/codex, openai/skills, openai/codex-plugins |
| gemini | Google Gemini app + Gemini CLI + Code Assist + NotebookLM + Gems | ai.google.dev, gemini-cli-extensions, Piebald-AI/awesome-gemini-cli-extensions |
| openclaw | Peter Steinberger's OpenClaw local AI agent | openclaw.ai, openclaw/clawhub, VoltAgent/awesome-openclaw-skills |
All five honor the agentskills.io open SKILL.md standard, so skills from one ecosystem often work in another — this is called out in every catalog.
Gather these before starting:
| Input | Required? | Default |
|---|---|---|
| tool | yes | ask the user: one of claude / cursor / codex / gemini / openclaw / all |
| since_date | no | 30 days before today (monthly cadence) |
| output_dir | no | current working directory |
| existing_file_mode | no | rewrite (default) or append-appendix (preserves body, adds "Updates since YYYY-MM-DD" appendix) |
If tool = all, loop through the five tools and produce ten files total.
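The tool = all fan-out can be sketched as follows. This is an illustrative helper, not part of the skill; the display names and the Claude "Desktop" exception mirror the file-naming rules in Step 8.

```python
# Sketch: the ten output files produced by a tool = all run.
# The "claude" playbook keeps "Desktop" in its name per the naming rules.
TOOLS = {
    "claude": "Claude",
    "cursor": "Cursor",
    "codex": "Codex",
    "gemini": "Gemini",
    "openclaw": "OpenClaw",
}

def output_files(tool_key: str) -> list:
    """Return the two artifact filenames for one tool key."""
    name = TOOLS[tool_key]
    playbook = (
        "Claude-Desktop-Productivity-Playbook.md"
        if tool_key == "claude"
        else f"{name}-Productivity-Playbook.md"
    )
    return [playbook, f"{name}-Skills-Catalog.md"]

# A tool = all run loops over every key and yields ten files total.
all_files = [f for key in TOOLS for f in output_files(key)]
```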
Always print today's date and the since_date used at the top of every generated file so the user can verify the time window later.
Copy this checklist into the conversation and track progress:
Research Progress:
- [ ] 1. Confirm inputs (tool, since_date, output_dir, mode)
- [ ] 2. Check existing files — if present, read them to avoid regressions
- [ ] 3. Research phase — search primary sources with since_date filter
- [ ] 4. Rating phase — apply validity + usefulness rubric to every item
- [ ] 5. Compose Productivity Playbook using playbook-template.md
- [ ] 6. Compose Skills Catalog using catalog-template.md
- [ ] 7. Verify link integrity + date stamps
- [ ] 8. Write files
- [ ] 9. Append run log entry
Do not skip any step. Step 2 is important — if the files already exist, you must read them so your update reflects genuine "what's new" signal, not repeated evergreen content.
If a tool isn't specified, ask with a quick multi-choice question. Example:
Which tool should I research this month?
- Claude Desktop
- Cursor
- OpenAI Codex
- Google Gemini
- OpenClaw
- All five
Default since_date = today - 30 days. If the user already ran this skill recently, prefer since_date = last_run_date from the run log (see Step 9).
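The default-window logic above can be sketched as a small helper (illustrative; the function name is an assumption, and last_run_date would come from the run log in Step 9):

```python
from datetime import date, timedelta
from typing import Optional

def default_since_date(today: date, last_run_date: Optional[date] = None) -> date:
    """Prefer the run log's last_run_date; otherwise fall back to today - 30 days."""
    if last_run_date is not None:
        return last_run_date
    return today - timedelta(days=30)
```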
Check for these files in output_dir:
- `<Tool>-Productivity-Playbook.md`
- `<Tool>-Skills-Catalog.md`
- `research-log.md` (created by Step 9)

If any exist:
- Add an "Updates since `<last_run_date>`" section near the top.
- If existing_file_mode = append-appendix, do NOT rewrite the body. Add a new appendix called "Updates — `<YYYY-MM-DD>`" at the end with only the deltas.

For the research phase, search the official GitHub orgs (anthropics/, openai/, getcursor/, google-gemini/, openclaw/) and community discussions for `<tool>` since:YYYY-MM-DD for fresh real-user workflows. For the exact search-query library per tool, read research-queries.md.
Apply date filtering to every search:
- `after:<since_date>` (e.g., after:2026-03-21)
- `since:<since_date>`
- `pushed:>=<since_date>` on repo search; for issues/PRs use `updated:>=<since_date>`

For every skill / rule / plugin / use case you plan to include, capture:
- persona fit (mapped against personas.md)

A single-tool run should surface:
- items dated after since_date

If you can't hit this bar, state so explicitly in the final file and lower the confidence claims.
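Assembling date-filtered queries per search surface can be sketched like this. The query shapes are assumptions for illustration; the real per-tool query library lives in research-queries.md.

```python
def dated_queries(tool: str, since: str) -> dict:
    """Apply the date qualifiers to each search surface for one tool."""
    return {
        "web": f"{tool} workflow after:{since}",          # general web search
        "social": f"{tool} since:{since}",                # social/forum search
        "github_repos": f"{tool} pushed:>={since}",       # repo search
        "github_issues": f"{tool} updated:>={since}",     # issues/PRs
    }
```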
Apply the rating system from rating-system.md to every skill / rule / plugin entry.
Quick version (details in the reference file):
Validity = is it real + maintained?
Usefulness = editorial 1–5 stars based on breadth, docs, persona fit, time-to-value.
Be honest. If something is hyped but you couldn't confirm maintenance, rate it 🔴 and say so. Do not give sympathy stars.
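The full rubric lives in rating-system.md; a minimal sketch of its two-axis shape (field and method names here are assumptions, not the rubric itself):

```python
from dataclasses import dataclass

@dataclass
class Rating:
    validity: str    # traffic light: "🟢" real + maintained, "🟡" unclear, "🔴" unconfirmed
    usefulness: int  # editorial 1-5 stars

    def render(self) -> str:
        """Render as an inline catalog cell, e.g. for a Markdown table."""
        return f"{self.validity} {'★' * self.usefulness}{'☆' * (5 - self.usefulness)}"
```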
Use the exact section structure in playbook-template.md. Do not renumber or rename sections — this is what makes month-over-month diffs useful.
Personas: always cover all eight from personas.md in the same order (PhD/research, solopreneur, marketer, designer, video/creator, developer, PKM/student, sales/finance/ops). If a persona has nothing meaningful for this tool, write one honest paragraph explaining why and move on.
Tone rules:
Length: 300–600 lines is the healthy range. Longer than 700 = you're padding.
Use the exact section structure in catalog-template.md.
Mandatory sections (skipping any breaks month-over-month consistency):
- Ratings applied per the rubric (see rating-system.md)

A ratings appendix at the end is encouraged but not required if ratings are already inline.
Before writing files, run these checks:
- A `Last updated:` line exists near the top of each file
- since_date and today_date are both printed
- File names follow the naming rules (Claude-Desktop-Productivity-Playbook.md / Claude-Skills-Catalog.md, etc.)
- Personas match personas.md

If any check fails, fix before proceeding to Step 8.
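The Step 7 checks can be sketched as a small verifier (illustrative only; it covers just the header checks, and the ten-line window is an assumption):

```python
from pathlib import Path

def verify_file(path: Path, since_date: str, today_date: str) -> list:
    """Return a list of problems found in the file's header; empty means it passes."""
    problems = []
    head_lines = path.read_text(encoding="utf-8").splitlines()[:10]
    if not any(line.startswith("Last updated:") for line in head_lines):
        problems.append("missing 'Last updated:' line near the top")
    head = "\n".join(head_lines)
    if since_date not in head or today_date not in head:
        problems.append("since_date / today_date not printed near the top")
    return problems
```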
Write to <output_dir>/<Tool>-Productivity-Playbook.md and <output_dir>/<Tool>-Skills-Catalog.md.
File-naming rules:
- `<Tool>` capitalization: Claude, Cursor, Codex, Gemini, OpenClaw
- Exception: the Claude playbook is named `Claude-Desktop-Productivity-Playbook.md` (note the "Desktop"). Preserve that exact name when updating — the skill catalog drops "Desktop" and is just `Claude-Skills-Catalog.md`.

If existing_file_mode = append-appendix, append to the existing files instead of overwriting.
Append a row to <output_dir>/research-log.md (create it if missing):
# AI Tool Research — Run Log
| Date run | Tool | since_date | Mode | New skills found | New use cases | Notable changes |
|---|---|---|---|---|---|---|
| 2026-04-21 | cursor | 2026-03-21 | rewrite | 14 | 9 | Composer 2 GA; 3 new MCP servers |
This is the source of truth for "when did I last run this" on future invocations.
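Step 9 can be sketched as follows, assuming the seven-column row format shown above (the helper name is an assumption):

```python
from pathlib import Path

HEADER = (
    "# AI Tool Research — Run Log\n\n"
    "| Date run | Tool | since_date | Mode | New skills found | New use cases | Notable changes |\n"
    "|---|---|---|---|---|---|---|\n"
)

def append_run(log: Path, *cells: str) -> None:
    """Create research-log.md with its header if missing, then append one row."""
    if not log.exists():
        log.write_text(HEADER, encoding="utf-8")
    with log.open("a", encoding="utf-8") as f:
        f.write("| " + " | ".join(cells) + " |\n")
```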
When finished, reply with:
"Wrote `<Tool>-Productivity-Playbook.md` (N lines) and `<Tool>-Skills-Catalog.md` (M lines), covering `<since_date>` → `<today>`."

Do NOT dump the full file contents into chat. The file exists — let the user open it.
Usage across runtimes:

- Cursor: install to `.cursor/skills/ai-tool-research/` (project-local) or `~/.cursor/skills/ai-tool-research/` (personal).
- Claude Desktop: install to `~/.claude/skills/ai-tool-research/` (or the Anthropic Skills path on your OS).
- ChatGPT (web): paste SKILL.md as the first message, preceded by: "You are an agent following this skill definition. Apply it to my next request." Paste the reference files (playbook-template.md, catalog-template.md, personas.md, rating-system.md, research-queries.md) in subsequent turns — ChatGPT will keep them in context. Save the generated .md files locally.
- Gemini CLI: install to `~/.gemini/skills/ai-tool-research/` (or whichever Skills path your Gemini CLI is configured with — see GEMINI.md).
- Codex: install to `~/.codex/skills/ai-tool-research/`. Reference it from AGENTS.md if you want it auto-loaded per project.
- OpenClaw: install to `~/.openclaw/workspace/skills/ai-tool-research/`.

Reference files:

- playbook-template.md — exact section structure + tone guide for the Productivity Playbook
- catalog-template.md — exact section structure for the Skills Catalog
- personas.md — the 8 consistent personas, use-case angles, and what "good coverage" looks like per persona
- rating-system.md — the full validity + usefulness rubric with decision flowcharts
- research-queries.md — reusable search-query templates per tool, per source type

Read these only when the step requires it — they're progressive disclosure to keep this top-level SKILL.md lean.
If available, the user's own previous files in output_dir are the best examples:
- Claude-Desktop-Productivity-Playbook.md + Claude-Skills-Catalog.md
- Cursor-Productivity-Playbook.md + Cursor-Skills-Catalog.md
- Codex-Productivity-Playbook.md + Codex-Skills-Catalog.md
- Gemini-Productivity-Playbook.md + Gemini-Skills-Catalog.md
- OpenClaw-Productivity-Playbook.md + OpenClaw-Skills-Catalog.md

Match their voice, section numbering, and level of detail. If the user's versions differ from the templates in this skill, prefer the user's version — their file is the source of truth for their stylistic preferences.
This skill is intentionally portable — no hard-coded paths, no runtime-specific features. It works because Claude, Cursor, Codex, Gemini, and OpenClaw all honor the agentskills.io open spec.