dev-test

v1.0.0

Structured development and testing SOP for implementing code changes. Covers codebase study, minimal focused implementation, test writing patterns, test exec...

by Bijin @sliverp

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for sliverp/dev-test.

Prompt Preview: Install & Setup
Install the skill "dev-test" (sliverp/dev-test) from ClawHub.
Skill page: https://clawhub.ai/sliverp/dev-test
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install dev-test

ClawHub CLI

Package manager switcher

npx clawhub@latest install dev-test
Security Scan
VirusTotal
Benign
OpenClaw
Benign (high confidence)
Purpose & Capability
The name/description (development & testing SOP) matches the SKILL.md content: repository study, focused implementation, tests, running test suites, and diff review. Nothing in the metadata or instructions requires cloud credentials, unrelated binaries, or other capabilities beyond typical repository access.
Instruction Scope
Runtime instructions are prescriptive and narrowly scoped to reading repo files (CONTRIBUTING.md, config files, modules, tests), writing focused code changes, running tests (pytest/npm/go test), and reviewing diffs. The guidance does not direct the agent to read unrelated system files, exfiltrate data, or call external endpoints beyond common test tooling.
Install Mechanism
No install spec or code files are present; this is instruction-only. Nothing will be downloaded or written to disk by the skill itself, so there is minimal installation risk.
Credentials
The skill does not declare or require any environment variables, credentials, or config paths. The operations it describes (reading repo files, running tests) reasonably require only repository access and standard developer tooling.
Persistence & Privilege
always is false and there is no request to modify other skills or persist configuration. The skill does not ask for permanent presence or elevated privileges beyond normal agent execution rights.
Assessment
This skill is an instructional SOP and is coherent with its stated purpose. It does not request credentials or install code, so direct technical risk from the skill content is low.

Before enabling: ensure the agent/runtime that will execute these instructions has appropriate permissions (so it cannot access secrets or private systems you don't want exposed), and confirm your repository does not contain hardcoded secrets (the SOP instructs you to check diffs for secrets, which is good practice). Because the skill source is 'unknown' and unverified, treat it as advisory: review the SKILL.md yourself to ensure the workflow aligns with your team's processes, and restrict autonomous agent execution if you want to avoid automated edits or test runs without human oversight.

Like a lobster shell, security has layers — review code before you run it.

latest: vk975xbh8s47zd2krdpvhstrc0983hb8h
130 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Dev Test — Development & Testing SOP

Overview

A structured workflow for implementing code changes with quality built in. Covers the full cycle from understanding the codebase to having a verified, reviewable diff.

Use cases: Bug fixes, feature development, refactoring, open-source contributions, any code change that needs to be correct and maintainable.

Workflow

Phase 1: Study Before Coding

Never write code before understanding the context. This phase prevents wasted effort and bad designs.

1a. Understand the project conventions

Check for and read these files (if they exist):

  • CONTRIBUTING.md — contribution guidelines
  • .editorconfig — formatting rules
  • pyproject.toml / package.json / Makefile — project config, linting, formatting
  • tox.ini / .flake8 / .eslintrc — code style rules

1b. Study the area of change

  • Read the module(s) you'll be modifying
  • Trace the execution path around the bug/feature
  • Understand data flow: what goes in, what comes out
  • Note dependencies: what other modules interact with this code

1c. Study existing tests

  • Find the test directory structure: tests/, __tests__/, test/
  • Note the test framework: pytest, jest, go test, JUnit, etc.
  • Study naming conventions: test_feature.py, feature.test.ts, feature_test.go
  • Note fixture/setup patterns used
  • Check for test utilities, factories, or mocks
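As a concrete sketch of the fixture and factory patterns mentioned above (names and data are hypothetical), a pytest setup might look like:

```python
import pytest

# Hypothetical factory: builds a valid test object with overridable fields.
# In a real suite, shared fixtures like this usually live in tests/conftest.py.
def make_user(**overrides):
    user = {"id": 1, "name": "Ada", "email": "ada@example.com"}
    user.update(overrides)
    return user

@pytest.fixture
def sample_user():
    # Tests request this fixture by naming it as a parameter.
    return make_user()

def test_user_has_email(sample_user):
    assert "@" in sample_user["email"]
```

Factories keep tests independent of each other: each test gets a fresh object and overrides only the fields it cares about.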

1d. Draft a mental model

Before writing code, articulate:

  • What needs to change (specific behavior)
  • Where in the code (files, functions, classes)
  • How you'll change it (approach)
  • What could go wrong (edge cases, regressions)

Phase 2: Implement

Core principles

  1. Minimal, focused changes

    • Fix the bug / add the feature. Nothing else.
    • Avoid unrelated formatting changes, import reordering, or drive-by refactors.
    • If you spot something else to fix, note it for a separate commit/PR.
  2. Follow existing patterns

    • Match the codebase's style, not your preferred style
    • Use the same naming conventions, indentation, comment style
    • If the project uses snake_case, don't introduce camelCase
  3. Add comments for non-obvious logic

    • "Why" comments, not "what" comments
    • Explain trade-offs, workarounds, and intentional decisions
  4. Design for quality

    Principle                Meaning                         Example
    Defense-in-depth         Layer multiple protections      Validate input at API + service + DB layer
    Backward compatibility   Don't break existing behavior   Add new params with defaults
    Graceful degradation     Handle missing features         Platform-specific code falls back safely
    Extensibility            Prefer composable designs       Plugin/middleware over hardcoded switch
    Single responsibility    One function = one job          Extract logic instead of growing functions
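For example, the backward-compatibility principle might look like this in Python (export_report is a hypothetical function):

```python
# Adding a capability without breaking existing callers:
# the new parameter gets a default that preserves old behavior.
def export_report(data, fmt="csv", include_header=True):
    # include_header is the newly added parameter; existing calls
    # like export_report(data) behave exactly as before.
    rows = []
    if include_header and data:
        rows.append(",".join(data[0].keys()))
    for record in data:
        rows.append(",".join(str(v) for v in record.values()))
    return "\n".join(rows)
```

Old call sites need no changes, and new callers can opt out of the header explicitly.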

Phase 3: Write Tests

Test categories (implement in order)

  1. Fix verification — prove the bug is fixed / feature works

    test_feature_handles_null_input()        # The exact scenario from the issue
    
  2. Edge cases — boundary conditions

    test_feature_with_empty_string()
    test_feature_with_max_length_input()
    test_feature_with_special_characters()
    
  3. Error handling — invalid inputs, failure paths

    test_feature_raises_on_invalid_type()
    test_feature_returns_none_on_missing_key()
    
  4. Regression — existing behavior preserved

    test_existing_behavior_unchanged()
    test_other_module_still_works()
    
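In pytest, the edge-case category above can often be collapsed into a single parametrized test; a sketch (normalize_tag is a hypothetical function under test):

```python
import pytest

# Hypothetical function under test: normalizes a user-supplied tag.
def normalize_tag(raw):
    return raw.strip().lower() if raw else ""

@pytest.mark.parametrize("raw,expected", [
    ("", ""),                  # empty string
    ("  Python  ", "python"),  # surrounding whitespace
    ("C++", "c++"),            # special characters
    (None, ""),                # null input
])
def test_normalize_tag_edge_cases(raw, expected):
    assert normalize_tag(raw) == expected
```

Each parameter set reports as its own test case, so a single boundary failure is still easy to diagnose.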

Test writing guidelines

  • One assertion per test (ideally) — makes failures easy to diagnose
  • Descriptive names: test_oauth_token_refreshes_when_expired, not test_token
  • Arrange-Act-Assert pattern:
    def test_feature():
        # Arrange
        input_data = create_test_data()
    
        # Act
        result = feature(input_data)
    
        # Assert
        assert result.status == "success"
    
  • Use fixtures for reusable setup
  • Mock external dependencies — network calls, file system, databases
  • Test behavior, not implementation — don't assert on internal state
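Mocking an external dependency could be sketched like this with unittest.mock (PaymentClient and checkout are hypothetical):

```python
from unittest.mock import patch

# Hypothetical external dependency: in production this hits the network.
class PaymentClient:
    def charge(self, amount):
        raise RuntimeError("real network call; must be mocked in tests")

# Hypothetical code under test.
def checkout(client, amount):
    receipt = client.charge(amount)
    return {"status": "paid", "receipt": receipt}

def test_checkout_with_mocked_client():
    # Patch the external call so the test is fast and deterministic.
    with patch.object(PaymentClient, "charge", return_value="r-123") as mock_charge:
        result = checkout(PaymentClient(), 9.99)
    mock_charge.assert_called_once_with(9.99)
    assert result == {"status": "paid", "receipt": "r-123"}
```

Note the test asserts on the returned behavior (the receipt in the result), not on the client's internal state.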

Phase 4: Run Tests

Progressive testing strategy

# 1. Run only your new/modified tests first (fast feedback)
python -m pytest tests/test_my_feature.py -v --tb=short
# or: npm test -- --testPathPattern=my_feature
# or: go test ./pkg/my_feature/... -v

# 2. Run the full test module/directory
python -m pytest tests/ -v --tb=short

# 3. Run the entire test suite (before committing)
python -m pytest --tb=short
# or: npm test
# or: go test ./...

Handling test results

Scenario                                 Action
All pass ✅                              Proceed to diff review
Your tests fail                          Fix the code, re-run
Pre-existing failures                    Note them, don't fix (out of scope)
Flaky tests (pass/fail randomly)         Run 3x to confirm flakiness, note in PR
Tests you can't run (need env/infra)     Note in PR, explain what you tested manually

Phase 5: Review the Diff

Before committing, review every line of your diff.

# Overview of what changed
git diff --stat

# Full diff
git diff

# If already staged
git diff --cached

Diff review checklist

  • Every change is intentional (no accidental edits)
  • No debug prints, TODO comments, or temporary code left in
  • No secrets, tokens, or personal paths hardcoded
  • No unrelated formatting changes
  • All new functions/classes have appropriate docstrings/comments
  • Test coverage looks adequate for the changes
  • File additions are in the right directories
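The secrets item in the checklist above can be partially automated; a minimal sketch (the patterns are illustrative, not exhaustive):

```python
import re

# Illustrative patterns only; a real setup would use a dedicated
# secret scanner rather than a short regex list.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
]

def find_suspect_lines(diff_text):
    """Return added diff lines that look like they contain a secret."""
    suspects = []
    for line in diff_text.splitlines():
        # Only added lines (leading "+") matter for this check.
        if line.startswith("+") and any(p.search(line) for p in SECRET_PATTERNS):
            suspects.append(line)
    return suspects
```

Feeding it the output of git diff gives a quick pre-commit sanity check; treat any hit as a prompt for manual review, not proof of a leak.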

Phase 6: Commit

# Stage specific files (never use `git add .`)
git add path/to/modified_file.py
git add tests/test_new_feature.py

# Verify staged files
git diff --cached --stat

# Commit with conventional message
git commit -m "fix(module): short description of what was fixed

Longer explanation of why, if non-obvious.

Addresses #issue_number"

Commit message format

{type}({scope}): {concise description}

{body: what and why, not how}

{footer: issue refs, test results, breaking changes}
Type       When
fix        Bug fix
feat       New feature
refactor   Code restructure (no behavior change)
test       Adding/fixing tests only
docs       Documentation changes only
chore      Build/tooling/dependency changes

Output

  • Committed code changes + tests on feature branch
  • All tests passing
  • Clean, reviewable diff

Tips

  • This skill works standalone for any development task.
  • In a contribution pipeline, it follows repo-setup and feeds into pr-pilot.
  • For large features, repeat Phase 2-5 in small increments rather than one big change.
  • When pair-programming with AI: have the AI study the codebase (Phase 1) before asking it to implement anything.
