Spec Writer

Generate structured implementation spec documents for coding projects or features. Use when a user provides a requirement, feature idea, bug description, or GitHub issue that needs a structured plan before coding begins.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description match the instructions: generating structured implementation specs. The runtime instructions only ask for project context (package.json, README, architecture docs, CI config, etc.) and external references when provided — all of which are reasonable for producing a useful spec.
Instruction Scope
The instructions authorize the agent to 'collect information from available sources' and to read project files and docs if accessible, then save SPEC.md in the project. This is appropriate for a spec generator, but broad phrasing could let an agent scan more of the workspace than strictly necessary. It does not instruct reading unrelated system files or exfiltrating data, but you should confirm scope (limit to project directory) before allowing autonomous runs.
Install Mechanism
No install spec and no code files are provided. Being instruction-only means nothing is written to disk by the skill itself beyond the spec it is asked to create at runtime.
Credentials
The skill requests no environment variables, credentials, or config paths. It mentions fetching GitHub issue details only 'if referenced' — fetching private issues would require credentials, but the skill does not declare or request any tokens itself.
Persistence & Privilege
The manifest sets always:false, and the skill makes no requests to modify other skills or global agent settings. The skill instructs saving a SPEC.md in the project (expected behavior) and does not demand permanent presence or elevated privileges.
Assessment
This skill appears coherent and limited to its stated purpose, but review a few operational points before enabling it:

  1. Confirm the agent is only allowed to read the intended project directory. The SKILL.md phrase "collect information from available sources" is broad and could permit scanning unrelated files if not constrained.
  2. If you expect the skill to fetch private GitHub issues, provide explicit tokens or paste the issue content yourself. The skill doesn't request credentials, so automatic fetching of private resources won't work unless the agent/platform already has access.
  3. Review any generated SPEC.md before handing it to coding agents to ensure it contains no sensitive snippets copied from the repo.
  4. If you allow autonomous invocation, consider limiting the skill's workspace and network permissions so the agent can only access the repo and services you intend.


Current version: v1.0.0


SKILL.md

Spec Writer — Structured Implementation Spec Generator

Generate high-quality, AI-agent-friendly spec documents from vague requirements.

When to Use

  • User has a requirement, feature idea, bug report, or GitHub issue
  • User wants a structured document before coding starts
  • User says "write a spec", "plan this out", "spec this feature", etc.
  • dev-workflow Phase 2 needs a detailed spec (can be called as a sub-step)

Output

A single Markdown spec document saved to the project directory (default: SPEC.md or spec/<name>.md).

Workflow

Step 1: Gather Context

Collect information from available sources. Do not ask the user for things you can find yourself.

From the user's input:

  • What to build and why
  • Any explicit constraints or preferences

From the project (if accessible):

  • Tech stack (package.json, Cargo.toml, pyproject.toml, go.mod, etc.)
  • Project structure (directory layout)
  • Existing architecture docs (llmdoc/, README, ARCHITECTURE.md, etc.)
  • Code style patterns (sample existing code)
  • Test setup (test framework, where tests live, how to run them)
  • Git workflow (branch conventions, CI config)
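The project-side gathering can be sketched as a small helper. This is an illustration only, assuming a Node project; the manifest name comes from the list above, and the returned keys follow npm's package.json schema:

```python
import json
from pathlib import Path

def read_node_stack(project_dir: str) -> dict:
    """Sketch: summarize a Node project's tech stack from package.json.
    Returns an empty dict when no manifest is present."""
    manifest = Path(project_dir) / "package.json"
    if not manifest.exists():
        return {}
    data = json.loads(manifest.read_text())
    return {
        "name": data.get("name", ""),
        "dependencies": sorted(data.get("dependencies", {})),
        "devDependencies": sorted(data.get("devDependencies", {})),
    }
```

The same pattern extends to Cargo.toml, pyproject.toml, or go.mod with the corresponding parsers.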

From external sources (if referenced):

  • GitHub issue details (title, body, comments, labels)
  • Linked docs, designs, or API references

Step 2: Draft the Spec

Use the spec template at references/spec-template.md. Read it before generating.

Fill every section based on gathered context. Key principles:

  • Be specific, not vague. "React 18 with TypeScript and Vite" not "React project."
  • Focus on what and why first. User stories and success criteria anchor everything.
  • Include real commands. Full commands with flags, not just tool names.
  • Show, don't describe. One code example beats three paragraphs of explanation.
  • State what NOT to do. Explicit exclusions prevent agent drift.
  • Use three-tier boundaries. ✅ Always / ⚠️ Ask first / 🚫 Never.
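As an illustration of the three-tier format, a Boundaries section in a generated spec might read like this (the specific rules are invented placeholders):

```markdown
## Boundaries

- ✅ Always: run the formatter and linters before committing
- ✅ Always: add tests for new behavior
- ⚠️ Ask first: adding a new runtime dependency
- ⚠️ Ask first: changing a database schema
- 🚫 Never: commit secrets or .env files
- 🚫 Never: force-push to main
```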

Step 3: Review with User

Present the draft to the user. Common discussion points:

  • Are the user stories complete? Any missing scenarios?
  • Is the technical approach acceptable? Alternative architectures?
  • Are boundaries correct? Anything too strict or too loose?
  • Are verification criteria testable and sufficient?
  • Is the task breakdown granularity right?

Revise until the user confirms. Mark status as "✅ Confirmed" when approved.

Step 4: Save and Deliver

Save the confirmed spec to the project. Suggested locations:

  • Single feature: SPEC.md in project root
  • Multiple specs: spec/<feature-name>.md
  • With dev-workflow: the spec replaces the requirement doc in Phase 2

Tell the user the spec is ready and suggest next steps:

  • Hand to a coding agent (dev-workflow, Claude Code, Codex, etc.)
  • Use as a reference for manual implementation
  • Share with team for review

Adapting to Project Scale

Small task (bug fix, small feature):

  • Skip or compress sections that don't apply
  • Focus on: Objective, Changes, Boundaries, Verification
  • Total spec: ~50-100 lines

Medium task (feature, refactor):

  • Use full template
  • Total spec: ~100-250 lines

Large task (new module, major feature):

  • Use full template with extended task breakdown
  • Consider splitting into multiple specs (one per component/module)
  • Include architecture diagram description or data model
  • Total spec: ~200-400 lines
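Under the small-task guidance above, a compressed spec might look like this (the feature, file, and command are invented for illustration):

```markdown
# SPEC: Fix off-by-one in pagination

## Objective
The last page of results drops the final item; include it.

## Changes
- `src/pager.ts`: compute page count with `Math.ceil(total / pageSize)`.

## Boundaries
- ✅ Always: add a regression test alongside the fix
- 🚫 Never: alter unrelated endpoints

## Verification
- `npm test -- pager` passes, including the new regression case.
```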

Principles

These come from industry best practices (GitHub's study of 2,500+ agent files, Anthropic's context engineering research, and practical spec-driven development patterns):

  1. Spec is the source of truth — It persists across sessions, anchoring the agent when context gets long or sessions restart.

  2. Structure for parseability — Clear Markdown headings, consistent format. AI models handle well-structured text better than free-form prose.

  3. Six core areas — Commands, Testing, Project Structure, Code Style, Git Workflow, Boundaries. Use as a completeness checklist.

  4. Three-tier boundaries — ✅ Always do (proceed without asking) / ⚠️ Ask first (need human approval) / 🚫 Never do (hard stop). More effective than flat rule lists.

  5. Modularity — Each section should be independently useful. A coding agent working on the backend doesn't need the frontend spec section in its context.

  6. Living document — Update the spec when decisions change. An outdated spec is worse than no spec.

Files

2 total
