QA Gate

v1.1.0

Final quality validation gate for any artifact before human review. Run this skill on documents, skills, PRDs, blog posts, or code artifacts to validate fact...

By Corbin Breton (@corbin-breton) · duplicate of @corbin-breton/keats-qa-gate

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for corbin-breton/qa-gate.

Prompt preview (Install & Setup):
Install the skill "Qa Gate" (corbin-breton/qa-gate) from ClawHub.
Skill page: https://clawhub.ai/corbin-breton/qa-gate
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install qa-gate

ClawHub CLI


npx clawhub@latest install qa-gate
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the instructions: the SKILL.md defines a read-only quality gate for artifacts. It does not request unrelated binaries, credentials, or installs.
Instruction Scope
Instructions are focused on inspection and on producing a pass/fail report. They explicitly state that the skill is read-only and should not modify artifacts. Two points to note: (1) the gate expects the agent to 'read the entire file', which requires the agent to have access to the artifact content or workspace; (2) verifying factual claims implies searching external sources (web access), although no network policy is declared. Both are coherent with the purpose but are operational considerations.
Install Mechanism
No install spec and no code files — instruction-only skill. Lowest installation risk; nothing is written to disk by an installer.
Credentials
The skill declares no required env vars, credentials, or config paths. The lack of credential requests is appropriate for a read-only QA gate.
Persistence & Privilege
The skill is not marked always:true and does not request persistent privileges. It does instruct the agent to write reports to a relative path (qa-gate/YYYY-MM-DD-<artifact-slug>.md) — this is reasonable for its purpose but means the agent needs write access to the workspace/evidence directory.
Assessment
This skill is instruction-only and appears coherent for a final QA pass. Before installing/using it: (1) Confirm the agent will be given only the artifact content you intend it to read — do not pass files containing secrets or private data. (2) Be aware the instructions expect the agent to verify factual claims, which likely requires web/network access; if you want to restrict outbound access, you should enforce that separately. (3) The skill writes a report file to a relative workspace path; ensure that the workspace location is acceptable and that generated reports will not leak sensitive content. (4) If you need stronger controls, consider adding explicit constraints to SKILL.md (e.g., 'do not access the network' or 'mask any secrets before ingestion').

Like a lobster shell, security has layers — review code before you run it.

latest: vk979vfswkk4z8nvrjn5cmxqygx83xpq0
224 downloads
0 stars
3 versions
Updated 4w ago
v1.1.0
MIT-0

QA Gate

Final release gate for any artifact before human review. Every document, skill, blog post, PRD, or code output should pass this gate before the principal sees it.

This is not a code review skill. It is a read-only release gate that determines whether an artifact is ready to move forward. QA Gate inspects artifacts but does not modify them.

When to Use

  • After any ralphy loop completes a PRD
  • Before presenting any deliverable to the principal
  • When self-reviewing documents, code, skills, or blog posts
  • As the final step before publishing to ClawHub or Gumroad
  • When asked to "QA gate this," "validate before publish," "final check," or run a "quality gate"

Optional Mode

  • --dual: Use cross-model QA validation when the artifact is high-stakes, ambiguous, or worth the extra cost/latency for a second independent quality pass.

Process

Step 1: Read the artifact completely

Read the entire file. Do not skim. Understand the structure, voice, and intent.

Step 2: Validate against 6 dimensions

1. Factual Accuracy (Sequential Claim Verification)

Extract every verifiable claim from the artifact into a mental checklist. Then verify each independently — do not batch-assess. For each claim:

  • Is it verifiable from a known source or self-evident from context?
  • If it references a citation (paper title, arXiv ID, finding), does the citation match?
  • If it describes a technical procedure, is the procedure feasible as described?
  • If it references a tool, API, or version, is the reference accurate and current?

Score: count of verified claims / total claims. If verification rate < 90%, flag for revision.
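Mechanically, the scoring rule above reduces to a simple ratio. A minimal sketch, where the claim results are hypothetical:

```python
# Sketch of the claim-verification scoring rule: verified / total,
# flagged for revision when the rate falls below 90%.
def verification_rate(results: list[bool]) -> float:
    """results[i] is True when claim i was independently verified."""
    if not results:
        return 1.0  # no verifiable claims: nothing to fail on
    return sum(results) / len(results)

# Hypothetical artifact: 9 of 10 claims verified.
claims = [True] * 9 + [False]
rate = verification_rate(claims)
needs_revision = rate < 0.90
print(f"{rate:.0%} verified; flag for revision: {needs_revision}")
```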

2. Tone & Voice Consistency

  • Does the document maintain its intended voice throughout?
  • No tonal drift between sections?
  • No marketing fluff, tutorial-speak, or filler?
  • Appropriate for the target audience (agent, human, or both)?

3. Completeness

  • No placeholders (TODO, TBD, FIXME, PLACEHOLDER, [FILL IN])?
  • All sections referenced in TOC/structure are present?
  • All promised content is delivered?
  • No orphaned references or dead links?
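The placeholder check lends itself to a mechanical scan. A sketch using the marker list above (the sample text is invented):

```python
import re

# Minimal placeholder scan for the completeness check; the marker
# list mirrors the ones named above (TODO, TBD, FIXME, ...).
PLACEHOLDER_RE = re.compile(r"\b(TODO|TBD|FIXME|PLACEHOLDER)\b|\[FILL IN\]")

def find_placeholders(text: str) -> list[tuple[int, str]]:
    """Return (1-based line number, matched marker) pairs."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for m in PLACEHOLDER_RE.finditer(line):
            hits.append((lineno, m.group(0)))
    return hits

sample = "## Setup\nTODO: write install steps\nDone.\n"
print(find_placeholders(sample))  # [(2, 'TODO')]
```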

4. Structural Integrity

  • Heading hierarchy is clean (no skipped levels)?
  • Code blocks are properly fenced and syntactically valid?
  • Section anchors work?
  • Back-links resolve to valid targets?
  • Markdown renders correctly?
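The "no skipped levels" check can be approximated for Markdown sources. A sketch assuming ATX-style `#` headings:

```python
import re

# Sketch of the "no skipped heading levels" check: flag any heading
# whose level jumps more than one past the previous heading.
def skipped_headings(markdown: str) -> list[str]:
    issues = []
    prev_level = 0
    for line in markdown.splitlines():
        m = re.match(r"^(#{1,6})\s", line)
        if not m:
            continue
        level = len(m.group(1))
        if prev_level and level > prev_level + 1:
            issues.append(f"jump from h{prev_level} to h{level}: {line.strip()}")
        prev_level = level
    return issues

doc = "# Title\n### Skipped\n"
print(skipped_headings(doc))  # ['jump from h1 to h3: ### Skipped']
```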

5. Operational Soundness (for technical documents)

  • Procedures are implementable as described?
  • Configuration formats match the actual system?
  • Commands and scripts are executable?
  • Edge cases are addressed?
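One rough way to pre-check that referenced commands are executable is to confirm each program exists on PATH. The command names below are illustrative, and this only checks presence, not correct usage:

```python
import shutil

# Approximate the "commands are executable" check: report any program
# a document tells the reader to run that is not found on PATH.
def missing_commands(commands: list[str]) -> list[str]:
    return [cmd for cmd in commands if shutil.which(cmd) is None]

# Hypothetical command list extracted from an artifact.
referenced = ["sh", "definitely-not-a-real-tool-xyz"]
print("missing:", missing_commands(referenced))
```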

6. Sensitive Data Check

  • No personal information (real names, schedules, addresses)?
  • No API keys, tokens, or secrets?
  • No internal-only references that shouldn't be public?
  • Examples use fictional/generic data?
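A heuristic scan can catch the most obvious secret leaks before human review. The patterns below are illustrative examples, not an exhaustive detector:

```python
import re

# Rough secret scan for the sensitive-data check. These two patterns
# are heuristics; real scanners use much larger rule sets.
SECRET_PATTERNS = [
    ("AWS access key", re.compile(r"\bAKIA[0-9A-Z]{16}\b")),
    ("generic api key", re.compile(r"(?i)\b(api[_-]?key|token|secret)\b\s*[:=]\s*\S+")),
]

def scan_secrets(text: str) -> list[str]:
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for label, pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(f"line {lineno}: possible {label}")
    return findings

# Fictional key, per the rule above that examples use generic data.
print(scan_secrets("api_key = sk-example-123\n"))
```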

Step 3: Produce gate verdict

Output must include a clear gate result:

PASS — ready for human review

or

PASS WITH FIXES
- MINOR [location]: issue description

or

FAIL
- CRITICAL [location]: issue description
- MAJOR [location]: issue description
- MINOR [location]: issue description

Step 4: If FAIL, fix and re-validate

Fix all CRITICAL and MAJOR issues. Re-run the gate. Only present to principal after PASS or PASS WITH FIXES.

Integration with PRD Workflows

Add to any PRD as a verification step:

### D) QA Gate
- [ ] Run QA Gate on all major artifacts produced in this PRD
- [ ] All artifacts must PASS before marking PRD complete
- [ ] Fix any CRITICAL or MAJOR issues identified

Output Format

Write validation report to: qa-gate/YYYY-MM-DD-<artifact-slug>.md (relative to your workspace or evidence directory)

Use this structure:

# QA Gate Report: <artifact name>

## Gate Result
PASS | PASS WITH FIXES | FAIL

## Artifact Type
Document | Skill | PRD | Blog Post | Code Artifact | Other

## Findings
- SEVERITY [location]: issue description

## Summary
Brief explanation of why the artifact passed, passed with fixes, or failed.
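The report path convention (qa-gate/YYYY-MM-DD-<artifact-slug>.md) can be sketched as a small helper; the slug rule here is an assumption, since the skill does not define one:

```python
import re
from datetime import date

# Build the report path from an artifact name and a date, following
# the qa-gate/YYYY-MM-DD-<artifact-slug>.md convention. The slugify
# rule (lowercase, non-alphanumerics collapsed to "-") is assumed.
def report_path(artifact_name: str, day: date) -> str:
    slug = re.sub(r"[^a-z0-9]+", "-", artifact_name.lower()).strip("-")
    return f"qa-gate/{day.isoformat()}-{slug}.md"

print(report_path("QA Gate Skill v1.1", date(2025, 1, 15)))
# qa-gate/2025-01-15-qa-gate-skill-v1-1.md
```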

Quality Standards

  • CRITICAL: Blocks release. Factual errors, security issues, broken functionality.
  • MAJOR: Should fix before release. Missing sections, tone drift, incomplete content.
  • MINOR: Nice to fix. Typos, formatting inconsistencies, style preferences.

A PASS with only MINOR issues is acceptable. CRITICAL or MAJOR = must fix first.
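The triage rule maps directly to the gate verdicts. A minimal sketch:

```python
# Severity-to-verdict rule: any CRITICAL or MAJOR finding forces a
# FAIL; MINOR-only findings downgrade to PASS WITH FIXES.
def gate_verdict(findings: list[str]) -> str:
    """findings is a list of severity labels: CRITICAL, MAJOR, MINOR."""
    if any(f in ("CRITICAL", "MAJOR") for f in findings):
        return "FAIL"
    if findings:
        return "PASS WITH FIXES"
    return "PASS"

print(gate_verdict([]))                  # PASS
print(gate_verdict(["MINOR"]))           # PASS WITH FIXES
print(gate_verdict(["MINOR", "MAJOR"]))  # FAIL
```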
