Skill Maker

v1.1.1

Create new agent skills from scratch. Use when: (1) Building specific capabilities, (2) Converting workflows into reusable skills, (3) Designing skill struct...

Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description claim this is a skill-authoring template and the SKILL.md contains templates, process steps, and file/directory conventions appropriate for that purpose. There are no environment variables, binaries, or install steps that would be unrelated to creating skills. Minor metadata inconsistency: SKILL.md frontmatter/version header lists 1.1.1 while _meta.json reports 1.1.0 (possible housekeeping issue, not a security problem).
Instruction Scope
Runtime instructions are authoring guidance and templates (how to write SKILL.md, directory layout, testing steps). They do not instruct the agent to read arbitrary system files, access credentials, or send data to external endpoints. Some guidance suggests 'read old first' when replacing an existing skill — which is reasonable for a skill author but does not imply automatic file system access in this package.
Install Mechanism
No install spec and no code files — instruction-only. This is the lowest-risk install posture because nothing from remote sources will be downloaded or written to disk as part of installation.
Credentials
No required environment variables, credentials, or config paths are declared. The SKILL.md does not reference any secrets or external service tokens. Proportionality is appropriate for an authoring/template skill.
Persistence & Privilege
`always` is false and `disable-model-invocation` is left at its default (the agent may invoke the skill autonomously), which is normal for skills. The skill does not request persistent system-level presence and does not modify other skills' configurations.
Assessment
This skill is an instruction-only template for creating new skills; it is internally consistent and low-risk, requesting no installs or secrets. Before installing or using it, note: (1) the package metadata has a minor version mismatch between SKILL.md and _meta.json, likely a housekeeping issue; (2) because it is an authoring tool, review any skill you create with it before enabling it, especially if that skill later adds scripts, install specs, credential requirements, or external downloads. For extra safety, test the skill in a restricted/sandbox agent environment first.

Like a lobster shell, security has layers — review code before you run it.

latest: vk979bzrrxjzt61trdc3f8ffk5n828xv4
413 downloads · 0 stars · 3 versions
Updated 1mo ago
v1.1.1
MIT-0

Skill Maker 🔨

Create powerful, reusable skills with structured reasoning.


The Skill Maker Framework

┌─────────────────────────────────────────────────────────────┐
│  SKILL FORGING PROCESS                                      │
├─────────────────────────────────────────────────────────────┤
│  1. INTERPRET → What capability does this skill need?       │
│  2. DESIGN    → Structure, resources, trigger conditions    │
│  3. FORGE     → Write SKILL.md, create resources            │
│  4. TEST      → Verify triggers, check quality              │
│  5. POLISH    → Refine based on testing                     │
└─────────────────────────────────────────────────────────────┘

Decision Tree: What Are We Building?

INTENT
    │
    ├── Brand new skill ──→ Start from Step 1
    │
    ├── Replace existing ──→ 
    │       └── Read old first, then improve
    │
    └── Clone & modify ──→ 
            └── Copy, rename, customize

Step 1: Interpret

The Core Questions

| Question | Your Answer |
|----------|-------------|
| What does this skill DO? | [Capability] |
| Who asks for it? | [User triggers] |
| What's the DOMAIN? | [Topic area] |
| How COMPLEX is it? | Simple / Medium / Complex |

Self-Check: Interpretation

  • Can I describe the skill in one sentence?
  • Do I know what phrases would trigger it?
  • Is this truly a new capability?

Step 2: Design

Complexity Decision

COMPLEXITY LEVEL
    │
    ├── Simple ──→ SKILL.md only
    │       └── One capability, clear steps
    │
    ├── Medium ──→ SKILL.md + references/
    │       └── Needs docs to reference
    │
    └── Complex ──→ SKILL.md + scripts/ + references/
            └── Needs executable code

Directory Structure

skill-name/
├── SKILL.md              # Required: name, description, body
├── scripts/              # Optional: executable code
├── references/           # Optional: detailed docs
└── assets/               # Optional: templates, files
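Scaffolding the layout above can be automated. A minimal Python sketch, assuming only this document's conventions (the `scaffold_skill` helper and its placeholder description are illustrative, not part of any official tooling):

```python
from pathlib import Path

def scaffold_skill(name: str, root: str = ".") -> Path:
    """Create the conventional skill layout: SKILL.md plus optional dirs."""
    skill = Path(root) / name
    # Optional subdirectories from the layout above.
    for sub in ("scripts", "references", "assets"):
        (skill / sub).mkdir(parents=True, exist_ok=True)
    skill_md = skill / "SKILL.md"
    if not skill_md.exists():
        # Minimal required frontmatter: name + description.
        skill_md.write_text(
            "---\n"
            f"name: {name}\n"
            'description: "[What it does]. Use when: (1) ..., (2) ..., (3) ..."\n'
            "---\n"
        )
    return skill
```

Delete the optional directories you do not need; a Simple skill keeps only SKILL.md.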

Writing Triggers

Users typically say:

  • "I need to [action]"
  • "How do I [task]?"
  • "Help me with [domain]"
  • "Can you [capability]?"

Formula for description:

"[What it does]. Use when: (1) [situation 1], (2) [situation 2], (3) [situation 3]."

Example:

"Fetch weather data from wttr.in. Use when: (1) User asks about weather, (2) User wants forecast, (3) User asks temperature in [city]."

Self-Check: Design

  • Name follows convention (lowercase, hyphens)?
  • Description has clear triggers?
  • I know which resources to include?

Step 3: Forge

SKILL.md Template

Copy this template for your skill:

---
name: my-skill
description: "[What it does]. Use when: (1) [trigger 1], (2) [trigger 2], (3) [trigger 3]."
---

# My Skill

## When This Skill Activates
This skill triggers when user wants to [capability].

## The [Domain] Framework

| Step | Action |
|------|--------|
| 1 | [What to do] |
| 2 | [What to do] |
| 3 | [What to do] |

## Workflow

### Step 1: [Name]
[What to do and why]

### Step 2: [Name]
[What to do and why]

### Decision Point
- If [condition]: do [A]
- If [condition]: do [B]

## Common Scenarios

### Scenario 1: [Case]
[What to do]

### Scenario 2: [Case]
[What to do]

## Troubleshooting

### Problem: [Error]
- Cause: [why]
- Fix: [how]

## Quick Reference

| Task | Action |
|------|--------|
| [Task 1] | [Command/Step] |
| [Task 2] | [Command/Step] |

Content Patterns

| Pattern | Use For |
|---------|---------|
| Numbered steps | Sequential workflows |
| Decision tree | Branching logic |
| Tables | Quick reference |
| Code blocks | Examples |
| Error sections | Troubleshooting |

Progressive Disclosure

IN SKILL.MD (< 500 lines):
├── Core workflow (must-know)
├── Key examples (most common)
└── Quick reference

IN REFERENCES/:
├── Detailed documentation
├── API specs
├── Edge cases
└── Extended examples
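The under-500-lines guideline can be checked mechanically. A tiny sketch, assuming the limit from this document (it is a convention here, not an enforced platform rule):

```python
from pathlib import Path

def check_skill_length(skill_md: str, limit: int = 500) -> bool:
    """Return True if SKILL.md stays under the progressive-disclosure limit."""
    lines = Path(skill_md).read_text().splitlines()
    return len(lines) < limit
```

If the check fails, move edge cases and extended examples into references/ rather than trimming the core workflow.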

Self-Check: Forge

  • Frontmatter complete (name + description)?
  • Body has reasoning framework?
  • Self-check prompts included?
  • Resources properly structured?

Step 4: Test

Trigger Testing

Read your description and ask:

Description: "[your description]"

Would this match user saying:
- "[trigger phrase 1]"? → YES/NO
- "[trigger phrase 2]"? → YES/NO
- "[trigger phrase 3]"? → YES/NO

Self-Check: Test

  • Does description match likely user phrases?
  • Is the skill findable via search?
  • Are there clear steps to follow?
  • Does it include error handling?

Step 5: Polish

Refinement Loop

Use the skill → Notice issues → Fix → Use again
    ↑                                    │
    └────────────────────────────────────┘

Common Fixes

| Problem | Solution |
|---------|----------|
| Won't trigger | Add more "Use when:" triggers |
| Too long | Move details to references/ |
| Confusing | Add example scenarios |
| Missing cases | Add troubleshooting section |

Self-Check: Polish

  • Tested on real task?
  • User feedback incorporated?
  • Ready for regular use?

Versioning Guide

When to Bump Version

| Change Type | Version Bump | Example |
|-------------|--------------|---------|
| Bug fix, no new features | 1.0.0 → 1.0.1 | v1.0.1 |
| New feature, backward compatible | 1.0.1 → 1.1.0 | v1.1.0 |
| Breaking changes | 1.1.0 → 2.0.0 | v2.0.0 |
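These bump rules are standard semantic versioning and can be sketched as follows (the `change` labels are illustrative):

```python
def bump(version: str, change: str) -> str:
    """Bump a MAJOR.MINOR.PATCH version; change is 'fix', 'feature', or 'breaking'."""
    major, minor, patch = map(int, version.split("."))
    if change == "breaking":
        return f"{major + 1}.0.0"   # breaking change resets minor and patch
    if change == "feature":
        return f"{major}.{minor + 1}.0"  # new feature resets patch
    return f"{major}.{minor}.{patch + 1}"  # bug fix
```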

Changelog Format

## Version 1.1.0

### Added
- New feature X

### Changed
- Improved Y

### Fixed
- Bug Z

Self-Check: Versioning

  • Did I increment the version correctly?
  • Is changelog updated?
  • Is this a breaking change?

Metadata Best Practices

Frontmatter Fields

---
name: my-skill
description: "[What it does]. Use when: (1) [trigger 1], (2) [trigger 2]."
version: 1.0.0
changelog: "[Brief summary of changes]"
metadata:
  clawdbot:
    emoji: "🔨"           # Emoji for the skill
    category: "creation"  # Category (research/coding/utility/etc)
    requires:
      bins: ["curl"]      # Required system binaries
      python: ["requests"] # Optional Python packages
---
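A frontmatter block like this can be sanity-checked without a YAML parser. A minimal sketch that only verifies the `---` delimiters and the required `name`/`description` keys (`check_frontmatter` is a hypothetical helper; the registry-specific `metadata.clawdbot` keys are not validated here):

```python
import re

def check_frontmatter(text: str) -> list[str]:
    """Return a list of problems found in a SKILL.md's frontmatter."""
    problems = []
    # Frontmatter must open the file and be fenced by '---' lines.
    m = re.match(r"\A---\n(.*?)\n---", text, re.DOTALL)
    if not m:
        return ["missing '---' delimited frontmatter"]
    block = m.group(1)
    for key in ("name", "description"):
        if not re.search(rf"^{key}:", block, re.MULTILINE):
            problems.append(f"missing required key: {key}")
    return problems
```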

Emoji Selection

| Category | Emoji | Examples |
|----------|-------|----------|
| Research | 🔬 | deep-research-pro, paper-compare |
| Coding | 💻 | code |
| Creation | 🔨 | skill-forge |
| Utility |  | surge |
| Weather | 🌤️ | weather |
| Discovery | 🔍 | find-skills |
| Media | 🎞️ | video-frames |
| Files | 📄 | pdf |

Category Tags

| Category | When to Use |
|----------|-------------|
| research | Research, analysis, comparisons |
| coding | Code-related tasks |
| utility | Tools, downloads, file operations |
| creation | Building new things |
| communication | Messaging, notifications |
| media | Video, audio, images |

Requirements Metadata

metadata:
  clawdbot:
    requires:
      bins: ["ffmpeg", "curl"]       # System binaries
      python: ["requests", "pandas"] # Python packages
      node: ["typescript"]           # Node packages
    os: ["linux", "darwin", "win32"] # Supported OS

Self-Check: Metadata

  • Is frontmatter complete?
  • Is emoji appropriate for category?
  • Are requirements listed?
  • Is version correct?

Why This Works

The Skill Logic Pattern

Based on research (SkillsBench 2026):

  1. Reasoning framework → Agent knows HOW to think, not just WHAT to do
  2. Decision trees → Agent can handle different scenarios
  3. Self-checks → Agent validates its work
  4. Progressive disclosure → Context-efficient

The Goldilocks Principle

"2-3 focused modules beat exhaustive documentation"

Keep it:

  • ✅ Complete enough to be useful
  • ✅ Concise enough to fit in context
  • ✅ Structured enough to guide reasoning

Example: Forging a Weather Skill

Step 1: Interpret

  • What: Fetch weather from wttr.in
  • Triggers: "weather in [city]", "temperature", "forecast"
  • Domain: Weather data

Step 2: Design

  • Complexity: Simple (just API calls)
  • Structure: SKILL.md only
  • Name: weather

Step 3: Forge

---
name: weather
description: "Get weather data. Use when: (1) User asks weather, (2) User wants forecast, (3) User asks temperature."
---

# Weather

## Reasoning

1. EXTRACT → Location from request
2. FETCH → Call wttr.in API
3. PARSE → Extract temp, conditions
4. PRESENT → Format for user
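The EXTRACT and FETCH steps above can be sketched with wttr.in's one-line text endpoint (the `format=3` query parameter is assumed from wttr.in's public interface; verify against its docs before relying on it):

```python
from urllib.parse import quote
from urllib.request import urlopen

def wttr_url(city: str) -> str:
    """Build a wttr.in request URL; format=3 asks for a one-line summary."""
    return f"https://wttr.in/{quote(city)}?format=3"

def fetch_weather(city: str, timeout: float = 5.0) -> str:
    # Network call; real usage needs error handling for unknown
    # cities, timeouts, and being offline (see Step 4-5 below).
    with urlopen(wttr_url(city), timeout=timeout) as resp:
        return resp.read().decode("utf-8").strip()
```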

Step 4-5: Test & Polish

  • Add more triggers ("sunny?", "rain?")
  • Add error handling (wrong city, no network)
  • Add presentation templates


Made with Skill Maker 🔨
