Write Adr

v1.1.2

Use when you want to generate Architecture Decision Records from this session. Triggers on "write ADRs", "document our decisions", "create decision records",...

by Kevin Anderson (@anderskev)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for anderskev/write-adr.

Prompt Preview: Install & Setup
Install the skill "Write Adr" (anderskev/write-adr) from ClawHub.
Skill page: https://clawhub.ai/anderskev/write-adr
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install write-adr

ClawHub CLI


npx clawhub@latest install write-adr
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description matches the actions in SKILL.md: it scans repository context, extracts decisions, asks the user to confirm, and writes ADR files. Required binaries/credentials are none, which is coherent for a repo-local ADR generator.
Instruction Scope
Instructions intentionally read repository state (git branch, recent commits, ls/find of docs directories) and write ADR files under docs/adr/ (and refer to docs/adrs/ elsewhere). This file I/O is expected for the purpose, but there is an inconsistency in directory naming (docs/adrs/ vs docs/adr/) and a dependency on a repository-local script (skills/adr-writing/scripts/next_adr_number.py) that may not exist. The skill also asks subagents to "explore the codebase," which is within scope but means the subagents will read project files for context.
Install Mechanism
No install spec or external downloads — instruction-only — so nothing is written to disk by an installer and no third-party packages are pulled by the skill itself.
Credentials
The skill declares no required environment variables or credentials, which is appropriate. It does reference loading other skills (beagle-analysis:adr-decision-extraction and adr-writing); you should verify you trust those subskills.
Persistence & Privilege
The skill is not always-enabled and requests no elevated agent privileges. It sets disable-model-invocation: true for itself while instructing the agent to launch subagents that do model work — this is a design choice (not a direct risk), but it means the skill relies on other skills/subagents to perform model-driven steps. The skill will write files into the repository (docs/adr/), so expect persistent changes to project files.
Assessment
This skill appears to do what it says: it scans your repo, extracts decisions from the session via subagents, asks you which to document, and writes MADR-formatted ADR markdown files. Before enabling or running it: 1) Confirm the repository contains the expected helper script (skills/adr-writing/scripts/next_adr_number.py) and the target directory (note the SKILL.md inconsistently references docs/adrs/ and docs/adr/), or it may fail or write files to an unexpected location. 2) Be aware subagents will read your repository files (commits, code) to collect context and will create files under docs/adr/ — review generated files before committing/pushing. 3) Verify you trust the referenced subskills (beagle-analysis:adr-decision-extraction and beagle-analysis:adr-writing) because this skill delegates model-driven work to them. 4) The skill does not request credentials or install external code, which reduces risk, but exercise the usual caution when allowing any skill to read and write your repository.


v1.1.2 · MIT-0
142 downloads · 0 stars · 2 versions
Updated 2w ago

Write ADR

Generate Architecture Decision Records (ADRs) from decisions made during the current session.

Workflow Overview

  1. Context - Gather repository context and existing ADRs
  2. Extract - Analyze conversation for decisions using a subagent
  3. Confirm - Present decisions to user for selection
  4. Write - Generate ADRs in parallel using subagents
  5. Report - Summarize created files and status
  6. Verify - Validate generated ADRs against Definition of Done

Step 1: Gather Context

# Get current branch and recent commits
git branch --show-current
git log --oneline -5

# Check for existing ADRs
ls docs/adr/ 2>/dev/null || echo "No ADR directory found"

# Count existing ADRs for numbering
find docs/adr -name "*.md" 2>/dev/null | wc -l

This context helps the ADR writer:

  • Reference related commits in the ADR
  • Avoid duplicate ADRs for already-documented decisions
  • Determine correct sequence numbering

Step 2: Extract Decisions

Launch a subagent to analyze the current conversation for architectural decisions:

Task(
  description: "Analyze conversation and extract architectural decisions",
  model: "sonnet",
  prompt: |
    Load the skill: Skill(skill: "beagle-analysis:adr-decision-extraction")

    Analyze the conversation for decisions that warrant ADRs:
    - Technology choices, architecture patterns, design trade-offs
    - Rejected alternatives, significant implementation approaches

    Return JSON:
    {
      "decisions": [
        {
          "id": 1,
          "title": "Use PostgreSQL for primary datastore",
          "confidence": "high | medium | low",
          "context": "Brief context about why this came up",
          "decision": "What was decided",
          "alternatives": ["What was considered but rejected"],
          "rationale": "Why this choice was made",
          "source": "Where in the conversation this was discussed"
        }
      ]
    }
)

If the subagent returns an empty decisions array, skip to Step 5 with message: "No architectural decisions detected in this session."
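Before acting on the subagent's output, the orchestrator can validate the returned JSON and drop malformed entries. A minimal sketch, assuming the field names from the schema above (`load_decisions` is a hypothetical helper, not part of the skill):

```python
import json

def load_decisions(raw: str) -> list[dict]:
    """Parse the extraction subagent's JSON and keep only well-formed decisions."""
    data = json.loads(raw)
    required = {"id", "title", "context", "decision", "alternatives", "rationale"}
    return [d for d in data.get("decisions", []) if required <= d.keys()]
```

If the filtered list is empty, the orchestrator skips to Step 5 exactly as described above.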

Step 3: Confirm with User

Display all extracted decisions with full details, then ask user to select:

## Detected Decisions

### 1. Use PostgreSQL for primary datastore
**Confidence:** high

**Problem:** Need ACID transactions for financial records

**Decision:** PostgreSQL for user data storage

**Alternatives discussed:**
- MongoDB
- SQLite

**Rationale:** ACID compliance, team familiarity, mature ecosystem

**Source:** Discussion about database selection in planning phase

---

### 2. Implement event sourcing for audit trail
**Confidence:** medium

**Problem:** Compliance requires complete audit history

**Decision:** Event sourcing pattern for state changes

**Alternatives discussed:**
- Database triggers
- Application-level logging

**Rationale:** Immutable audit trail, temporal queries, debugging capability

**Source:** Compliance requirements discussion

---

## Selection

Which decisions should I write ADRs for?
- Enter numbers (e.g., "1,2" or "1-2"), "all", or "none" to skip

Important: Always display the full decision details (problem, decision, alternatives, rationale) from the extraction output BEFORE asking for selection. Do not truncate to just title and context.

Parse user response:

  • "all" - Process all decisions
  • "none" or empty - Skip with message "No ADRs will be created."
  • "1,2" or "1-2" - Process specified decisions
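The selection grammar above (numbers, comma lists, ranges, "all", "none") can be parsed with a short helper. This is a hypothetical sketch; the skill itself leaves parsing to the agent:

```python
def parse_selection(reply: str, total: int) -> list[int]:
    """Map a user reply ("all", "none", "1,3", "2-4") to a sorted list of decision ids."""
    reply = reply.strip().lower()
    if reply in ("", "none"):
        return []
    if reply == "all":
        return list(range(1, total + 1))
    selected: set[int] = set()
    for part in reply.split(","):
        part = part.strip()
        if "-" in part:
            start, end = part.split("-", 1)
            selected.update(range(int(start), int(end) + 1))
        else:
            selected.add(int(part))
    # Silently drop ids outside 1..total rather than erroring
    return sorted(n for n in selected if 1 <= n <= total)
```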

Step 4: Write ADRs (Parallel)

Pre-allocate ADR numbers before launching subagents to prevent numbering conflicts:

# Pre-allocate numbers for all confirmed decisions
# Example: If user selected 3 decisions
python skills/adr-writing/scripts/next_adr_number.py --count 3
# Output:
# 0003
# 0004
# 0005

Assign each pre-allocated number to its corresponding decision before launching subagents.
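The referenced helper script is repository-local and may not exist (the security report above flags this). A minimal sketch of what `next_adr_number.py` could look like, assuming ADRs live in `docs/adr/` and follow the `NNNN-title.md` pattern:

```python
import re
import sys
from pathlib import Path

def next_adr_numbers(adr_dir: str = "docs/adr", count: int = 1) -> list[str]:
    """Return the next `count` zero-padded ADR numbers after the highest existing one."""
    adr_path = Path(adr_dir)
    highest = 0
    if adr_path.is_dir():
        for path in adr_path.glob("*.md"):
            match = re.match(r"(\d{4})-.*\.md$", path.name)
            if match:
                highest = max(highest, int(match.group(1)))
    return [f"{highest + i + 1:04d}" for i in range(count)]

if __name__ == "__main__":
    # Mirror the documented invocation: next_adr_number.py --count 3
    count = int(sys.argv[sys.argv.index("--count") + 1]) if "--count" in sys.argv else 1
    print("\n".join(next_adr_numbers(count=count)))
```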

For each confirmed decision, launch an ADR Writer subagent in background with its pre-assigned number:

Task(
  description: "Write ADR for: {decision.title}",
  model: "sonnet",
  run_in_background: true,
  prompt: |
    Load the skill: Skill(skill: "beagle-analysis:adr-writing")

    Write an ADR for this decision:
    ```json
    {decision JSON}
    ```

    **IMPORTANT: Use this pre-assigned ADR number: {assigned_number}**

    Instructions:
    1. Explore codebase for additional context
    2. Write MADR-formatted ADR to docs/adr/
    3. Use the pre-assigned number {assigned_number} - DO NOT call next_adr_number.py
    4. Filename format: {assigned_number}-slugified-title.md
    5. Return created file path
)

Critical: Pass the pre-allocated number to each subagent. Subagents must NOT call next_adr_number.py themselves - this causes duplicate numbers when running in parallel.

All subagents run in parallel. Wait for all to complete before proceeding.
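The `{assigned_number}-slugified-title.md` filename rule can be sketched as a small helper (hypothetical; the subagents apply this rule themselves):

```python
import re

def adr_filename(number: str, title: str) -> str:
    """Build an NNNN-slugified-title.md filename from a pre-assigned number and a title."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{number}-{slug}.md"
```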

Step 5: Report Results

Collect outputs from all subagents and present summary:

## ADR Generation Complete

| File | Decision | Status |
|------|----------|--------|
| docs/adr/0003-use-postgresql.md | Use PostgreSQL for primary datastore | Draft |

### Next Steps
- Review generated ADRs for accuracy
- Update status from "proposed" to "accepted" when finalized

### Gaps Requiring Investigation
- [List any decisions where subagent noted missing context]

If no decisions were processed:

No ADRs were created. Run this command again after making architectural decisions.

Step 6: Verify Generated ADRs

For each created ADR, validate against Definition of Done:

## Verification Checklist

| ADR | E | C | A | D | R | Status |
|-----|---|---|---|---|---|--------|
| 0003-use-postgresql.md | ✓ | ✓ | ✓ | ⚠ | ✗ | Incomplete |

Legend: E=Evidence, C=Criteria, A=Agreement, D=Documentation, R=Realization

Verification steps:

  1. Open each generated ADR file
  2. Confirm filename follows NNNN-slugified-title.md pattern
  3. Verify YAML frontmatter exists at file start:
    • File MUST begin with ---
    • Contains status: draft (or valid status)
    • Contains date: YYYY-MM-DD (actual date)
    • Ends with --- before title
    • If frontmatter is missing, add it immediately
  4. Review for [INVESTIGATE] prompts - these need follow-up
  5. Verify at least 2 alternatives are documented
  6. Confirm consequences section has both Good and Bad items

If gaps exist:

  • Keep status as draft until gaps are resolved
  • Use [INVESTIGATE] prompts to guide follow-up session
  • Schedule review with stakeholders before changing to accepted
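The mechanical parts of the checklist (frontmatter present, status and date fields, unresolved `[INVESTIGATE]` prompts) can be automated. A hedged sketch; `check_frontmatter` is a hypothetical helper, not shipped with the skill:

```python
import re

def check_frontmatter(text: str) -> list[str]:
    """Return a list of Definition-of-Done problems found in an ADR's text."""
    problems = []
    if not text.startswith("---\n"):
        problems.append("file does not begin with ---")
        return problems
    end = text.find("\n---", 4)
    if end == -1:
        problems.append("frontmatter is not closed with ---")
        return problems
    frontmatter = text[4:end]
    if not re.search(r"^status:\s*\S+", frontmatter, re.M):
        problems.append("missing status field")
    if not re.search(r"^date:\s*\d{4}-\d{2}-\d{2}\s*$", frontmatter, re.M):
        problems.append("missing or malformed date (expected YYYY-MM-DD)")
    if "[INVESTIGATE]" in text:
        problems.append("unresolved [INVESTIGATE] prompt")
    return problems
```

A file that passes these checks still needs the human checks above (at least two alternatives, Good and Bad consequences).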

Output Location

ADRs are written to docs/adr/. If no ADR directory exists, create it with an initial 0000-use-madr.md template record.
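That bootstrap step might look like the following sketch; the seed text for `0000-use-madr.md` is an assumption, since the skill does not specify it:

```python
from pathlib import Path

# Seed content for the initial record; hypothetical, not specified by the skill.
MADR_SEED = """---
status: accepted
date: 2024-01-01
---

# Use MADR for architecture decision records

## Decision Outcome

Chosen option: "MADR", because it is a minimal, widely used ADR template.
"""

def ensure_adr_dir(root: str = ".") -> Path:
    """Create docs/adr/ with a seed 0000 record if it does not already exist."""
    adr_dir = Path(root) / "docs" / "adr"
    adr_dir.mkdir(parents=True, exist_ok=True)
    seed = adr_dir / "0000-use-madr.md"
    if not seed.exists():
        seed.write_text(MADR_SEED)
    return adr_dir
```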

MADR Format Reference

---
status: draft
date: YYYY-MM-DD
---

# {TITLE}

## Context and Problem Statement

{What is the issue motivating this decision?}

## Decision Drivers

* {driver 1}
* {driver 2}

## Decision Outcome

Chosen option: "{option}", because {reason}.

### Consequences

* Good, because {positive}
* Bad, because {negative}
