Pytest Code Review

v1.1.1

Reviews pytest test code for async patterns, fixtures, parametrize, and mocking. Use when reviewing test_*.py files, checking async test functions, fixture u...

0 stars · 164 downloads · 1 current · 1 all-time
by Kevin Anderson (@anderskev)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for anderskev/pytest-code-review.

Prompt Preview: Install & Setup
Install the skill "Pytest Code Review" (anderskev/pytest-code-review) from ClawHub.
Skill page: https://clawhub.ai/anderskev/pytest-code-review
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install pytest-code-review

ClawHub CLI


npx clawhub@latest install pytest-code-review

Security Scan

Capability signals

  • Crypto: Can make purchases

These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.

  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (pytest test review for async, fixtures, parametrize, mocking) align with the provided SKILL.md and reference docs. There are no unrelated env vars, binaries, or install steps that would be out-of-scope for a reviewer.
Instruction Scope
Runtime instructions focus on enumerating and analyzing test_*.py and conftest.py files and consulting the bundled reference docs. The guidance limits findings to scoped files and does not instruct reading unrelated system files, environment variables, or sending data externally.
Install Mechanism
No install spec or code files are provided — this is an instruction-only skill, so nothing is downloaded or written to disk during install.
Credentials
The skill requests no environment variables, credentials, or config paths. The analysis it prescribes operates on repository test files only, which is proportionate to its stated purpose.
Persistence & Privilege
Flags: always is false (not force-included) and model invocation is allowed (default). Neither setting is excessive for this kind of skill and the skill does not request long-term persistence or modify other skills' configuration.
Assessment
This skill appears coherent and limited to reviewing pytest test files; it requires no credentials or installs. Before enabling it, confirm you are comfortable allowing the agent access to the repository files you want reviewed (it will enumerate and read test_*.py and conftest.py). If you do not want automated agents to run this skill autonomously, keep autonomous invocation disabled for your agent or review results manually. If you need stronger assurance, inspect the SKILL.md and reference files yourself — they are bundled and readable and contain the full runtime instructions.

Like a lobster shell, security has layers — review code before you run it.

latest: vk974kn6tafzmq97pra1b9kre5985beq8
164 downloads
0 stars
2 versions
Updated 6d ago
v1.1.1
MIT-0

Pytest Code Review

Quick Reference

  Issue Type                                        Reference
  async def test_*, AsyncMock, await patterns       references/async-testing.md
  conftest.py, factory fixtures, scope, cleanup     references/fixtures.md
  @pytest.mark.parametrize, DRY patterns            references/parametrize.md
  AsyncMock tracking, patch patterns, when to mock  references/mocking.md
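
The async row above can be sketched in a few lines. This is a minimal illustration, not the skill's bundled reference: `get_username` and the client are hypothetical, and a real suite would mark an `async def test_*` function with pytest-asyncio rather than calling `asyncio.run` directly.

```python
import asyncio
from unittest.mock import AsyncMock

# Hypothetical async code under test: it awaits an async client dependency.
async def get_username(client, uid):
    user = await client.fetch_user(uid)
    return user["name"]

# Under pytest-asyncio this would be an `async def test_*` function marked
# @pytest.mark.asyncio; asyncio.run() below only makes the sketch runnable.
async def test_get_username():
    client = AsyncMock()  # AsyncMock, not Mock: fetch_user must return an awaitable
    client.fetch_user.return_value = {"name": "ada"}

    assert await get_username(client, 1) == "ada"
    client.fetch_user.assert_awaited_once_with(1)  # verifies an await, not just a call

asyncio.run(test_get_username())
```

Note that `assert_awaited_once_with` would fail if the coroutine were created but never awaited, which is exactly the bug a plain `assert_called_once_with` can miss.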

Review gates

Work in order. Do not assert pytest-specific problems until each applicable gate passes.

  1. Scoped files. Pass when: you list every test_*.py and any conftest.py you will cite; no findings for files outside that list.
  2. Async vs sync. Pass when: per scoped file, you note whether it uses async def test_* / await; if yes, open references/async-testing.md before criticizing async usage.
  3. Fixtures. Pass when: if shared setup matters, you name the conftest.py path(s) or state none; for yield fixtures, confirm cleanup exists before claiming resource leaks.
  4. patch / mocks. Pass when: for any patch or mock critique, you give the import path where the symbol is used (the call site), or mark it N/A; open references/mocking.md when mocking is central to the review.
  5. Findings. Pass when: each finding includes a file path and line(s) or test node ID, not a generic rule restatement.
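
Gate 4's call-site rule is the one reviewers most often get backwards. A self-contained sketch, with two tiny modules fabricated inline purely to mirror a `from clients import fetch_user` layout (all names are illustrative):

```python
import sys
import types
from unittest.mock import patch

# Mirror a real layout: clients.py defines fetch_user;
# service.py does `from clients import fetch_user`.
clients = types.ModuleType("clients")
clients.fetch_user = lambda uid: "real"
sys.modules["clients"] = clients

service = types.ModuleType("service")
service.fetch_user = clients.fetch_user           # the binding `from ... import` creates
service.get_user = lambda uid: service.fetch_user(uid)
sys.modules["service"] = service

# Patching where the symbol is DEFINED misses service's own binding:
with patch("clients.fetch_user", return_value="fake"):
    assert service.get_user(1) == "real"          # still the original

# Patching where the symbol is USED (the call site) takes effect:
with patch("service.fetch_user", return_value="fake"):
    assert service.get_user(1) == "fake"
```

This is why the gate asks for the import path at the call site: `patch("clients.fetch_user")` silently leaves `service`'s already-imported reference untouched.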

Review Checklist

  • Test functions are async def test_* for async code under test
  • AsyncMock used for async dependencies, not Mock
  • All async mocks and coroutines are awaited
  • Fixtures in conftest.py for shared setup
  • Fixture scope appropriate (function, class, module, session)
  • Yield fixtures have proper cleanup in a finally block
  • @pytest.mark.parametrize for similar test cases
  • No duplicated test logic across multiple test functions
  • Mocks track calls properly (assert_called_once_with)
  • patch() targets correct location (where used, not defined)
  • No mocking of internals that should be tested
  • Test isolation (no shared mutable state between tests)

When to Load References

  • Reviewing async test functions → async-testing.md
  • Reviewing fixtures or conftest.py → fixtures.md
  • Reviewing similar test cases → parametrize.md
  • Reviewing mocks and patches → mocking.md

Review Questions

  1. Are all async functions tested with async def test_*?
  2. Are fixtures properly scoped with appropriate cleanup?
  3. Can similar test cases be parametrized to reduce duplication?
  4. Are mocks tracking calls and used at the right locations?
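
For question 3, a minimal parametrize sketch (the function under test and the case names are illustrative): three near-identical tests collapse into one table of cases, and `ids` keeps failures readable in the pytest report.

```python
import pytest

# One parametrized test instead of three copy-pasted test functions.
@pytest.mark.parametrize(
    "raw, expected",
    [
        ("  Ada  ", "ada"),
        ("ADA", "ada"),
        ("", ""),
    ],
    ids=["strips-whitespace", "lowercases", "empty-string"],
)
def test_normalize(raw, expected):
    assert raw.strip().lower() == expected
```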
