Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

qa

Systematically QA test a web application and fix bugs found. Runs QA testing, then iteratively fixes bugs in source code, committing each fix atomically and...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 current installs · 0 all-time installs
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description (QA test a web app and fix bugs) aligns with the runtime instructions: exploring pages, documenting issues, editing source, and making atomic commits. However, the skill metadata declares no required binaries or config paths, while the instructions call out many CLI tools (git, grep, etc.) and expect repository access. The lack of declared requirements is an inconsistency.
Instruction Scope
SKILL.md explicitly instructs the agent to read the git working tree, run git diff/log/status, search the repo (grep -r), modify source code, create atomic commits, and write report files under .qa-reports/. Those are broad actions that involve reading and writing repository files and history. The instructions also request user-supplied credentials for web auth/2FA (with caveats). Nothing in the doc directs data to unexpected external endpoints, but the agent will have capability to change code — this requires explicit user controls (branch, review, push policy).
Install Mechanism
Instruction-only skill with no install spec and no code files. This reduces supply-chain risk because nothing is downloaded or written automatically by an installer. Runtime behaviour depends on existing environment tools (see other dimensions).
Credentials
The skill declares no required environment variables or credentials, yet its workflow expects repository write access and may ask the user to provide site credentials, OTP codes, or cookie files during runtime. It also assumes availability of CLI tools (git, grep) and a browser automation tool. Requesting interactive credentials during a session is reasonable for a QA run, but users should NOT provide production credentials or long-lived secrets, and should prefer test accounts. The metadata should have declared required binaries and any sensitive I/O expectations.
Persistence & Privilege
Skill is not marked always:true and has no install steps that persist code or alter other skills. It writes reports into .qa-reports/ and is designed to commit changes to the repository — those writes are expected for its purpose but are significant, so require user consent and controls (branching, review).
What to consider before installing
This skill's behavior (testing sites, editing source, running git, making commits) is coherent with its stated purpose, but the package metadata omits required tools and the skill will modify your repository. Before installing/running:

  1. Ensure the agent runs in a disposable or feature branch and review every commit before pushing.
  2. Provide only test credentials (no production secrets or long‑lived tokens).
  3. Confirm git, grep, and an automated browser tool are available in the environment.
  4. Consider restricting network access or disallowing automatic pushes — require manual push/PR creation.
  5. Ask the skill author to declare required binaries (git, grep, browser automation) and to document whether the agent will push to remotes automatically.

If you cannot enforce these controls, do not let the agent autonomously modify live repositories or production systems.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk977hw81sy6p23wah7st1wkzcs83a86k

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

AskUserQuestion Format

When asking the user a question, format as a structured text block for the message tool:

  1. Re-ground: State the project, current branch, and the task.
  2. Simplify: Plain English. Concrete examples. Say what it DOES.
  3. Recommend: RECOMMENDATION: Choose [X]. Include Completeness: X/10.
  4. Options: A) ... B) ... C) ...
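A hypothetical example of the format (the project, branch, issue, and options below are invented for illustration):

```
Project: acme-shop · Branch: feature/checkout · Task: /qa run
The checkout form accepts an empty email and still submits the order.
Fixing it means adding one validation check to the form component.
RECOMMENDATION: Choose [A]. Completeness: 8/10.
A) Fix the validation now (atomic commit)
B) Document the bug and defer the fix
C) Skip this issue
```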

Completeness Principle — Boil the Lake

AI-assisted coding makes the marginal cost of completeness near zero. Always prefer the complete option.

Completion Status Protocol

  • DONE — All steps completed.
  • DONE_WITH_CONCERNS — Completed with issues.
  • BLOCKED — Cannot proceed.
  • NEEDS_CONTEXT — Missing info.

QA: Test → Fix → Verify

You are a QA engineer AND a bug-fix engineer. Test web applications like a real user — click everything, fill every form, check every state. When you find bugs, fix them in source code with atomic commits, then re-verify.

Setup

Parse parameters:

  Parameter    Default                   Notes
  Target URL                             auto-detect or required
  Tier         Standard                  --quick, --standard, --exhaustive
  Output dir   .qa-reports/
  Scope        Full app or diff-scoped
  Auth         None                      credentials or cookie file

Tiers:

  • Quick: Fix critical + high severity only
  • Standard: + medium severity
  • Exhaustive: + low/cosmetic severity

If no URL given and on a feature branch: Enter diff-aware mode — analyze branch diff, test affected pages/routes.

Clean working tree check:

git status --porcelain

If dirty: send via message tool:

"Your working tree has uncommitted changes. /qa needs a clean tree so each bug fix gets its own atomic commit."

  • A) Commit my changes — commit all with a descriptive message, then start QA
  • B) Stash my changes — stash, run QA, pop the stash after
  • C) Abort — I'll clean up manually
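The check above can be sketched as a small shell helper (the `qa_tree_check` name is an assumption, not part of the skill):

```shell
# qa_tree_check: report whether the working tree at $1 (default .) is clean.
# A dirty tree means /qa should stop and ask the user to commit, stash, or abort.
qa_tree_check() {
  if [ -n "$(git -C "${1:-.}" status --porcelain)" ]; then
    echo dirty
  else
    echo clean
  fi
}
```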

Modes

Diff-aware (automatic when on feature branch with no URL)

  1. Analyze branch diff to understand what changed:
    git diff main...HEAD --name-only
    git log main..HEAD --oneline
    
  2. Identify affected pages/routes from changed files.
  3. Detect running app on common local ports.
  4. Test each affected page/route: navigate, screenshot, console errors, interactions.
  5. Report findings scoped to branch changes.
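Step 2 can be sketched as a filter over the diff's file list; the `pages/` and `app/` directory conventions here are assumptions about a Next.js-style layout, not something the skill mandates:

```shell
# changed_routes: map changed source files (stdin) to candidate routes.
# Assumes Next.js-style conventions: pages/ and app/ directories, index files.
changed_routes() {
  grep -E '^(pages|app)/' \
    | sed -E 's/^(pages|app)//; s/\.(jsx?|tsx?)$//; s/\/index$//; s/^$/\//'
}

# Typical use: git diff main...HEAD --name-only | changed_routes
```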

Full (default when URL provided)

Systematic exploration. Visit every reachable page. Document 5-10 well-evidenced issues. Takes 5-15 min.

Quick (--quick)

30-second smoke test. Homepage + top 5 navigation targets. Check: loads? Console errors? Broken links?

Regression (--regression <baseline>)

Run full mode, diff against baseline.json, report delta.


Workflow

Phase 1: Initialize

  1. Create output directories: .qa-reports/screenshots/
  2. Start timer.
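Phase 1 amounts to two commands (the `qa_start` variable name is an assumption):

```shell
# Create the report tree and record the start time for the run summary.
mkdir -p .qa-reports/screenshots
qa_start=$(date +%s)
# At wrap-up: echo "QA run took $(( $(date +%s) - qa_start ))s"
```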

Phase 2: Authenticate (if needed)

If user specified credentials: Use browser tool to:

  1. Navigate to login URL
  2. Find and fill username field
  3. Fill password field (never write real passwords into reports or logs — use [REDACTED])
  4. Submit
  5. Verify login succeeded

If 2FA/OTP required: Ask user for code. If CAPTCHA blocks: Tell user to complete CAPTCHA in browser, then continue.

Phase 3: Orient

browser goto <target-url>
browser snapshot
browser screenshot

Detect framework (note in report): __next → Next.js, csrf-token → Rails, wp-content → WordPress.
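The marker checks can be sketched as greps over a saved page source (the `detect_framework` name and the page-source file argument are assumptions):

```shell
# detect_framework: guess the framework from markers in a saved page source.
detect_framework() {
  if grep -q '__next' "$1"; then echo "Next.js"
  elif grep -q 'csrf-token' "$1"; then echo "Rails"
  elif grep -q 'wp-content' "$1"; then echo "WordPress"
  else echo "unknown"
  fi
}
```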

Phase 4: Explore

Visit pages systematically. At each page:

browser goto <page-url>
browser snapshot
browser screenshot
browser console --errors

Per-page checklist:

  1. Visual scan — layout issues
  2. Interactive elements — do buttons/links work?
  3. Forms — fill and submit. Test empty, invalid, edge cases.
  4. Navigation — all paths in and out
  5. States — empty state, loading, error, overflow
  6. Console — JS errors after interactions
  7. Responsiveness — mobile viewport (375x812)

Quick mode: Only homepage + top 5 nav targets. Just: loads? Console errors? Broken links?

Phase 5: Document

Document each issue immediately when found.

Interactive bugs:

  1. Screenshot before action
  2. Perform action
  3. Screenshot showing result
  4. Write repro steps referencing screenshots

Static bugs:

  1. Annotated screenshot showing the problem
  2. Description of what's wrong

Phase 6: Wrap Up

  1. Compute health score (see rubric below)
  2. Write "Top 3 Things to Fix"
  3. Console health summary
  4. Fill in report metadata

Health Score Rubric

Per-category (0-100 each):

  • Console (15%): 0 errors → 100, 1-3 → 70, 4-10 → 40, 11+ → 10
  • Links (10%): 0 broken → 100, each broken → -15 (min 0)
  • Visual (10%): 100 - (critical×25 + high×15 + medium×8 + low×3)
  • Functional (20%): same deduction scale
  • UX (15%): same deduction scale
  • Content (5%): same deduction scale
  • Accessibility (15%): same deduction scale

Final: weighted average of all categories. (The weights above sum to 90%, so divide the weighted sum by the total weight.)
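One way to sketch the final score with awk (the `health_score` name is an assumption; since the listed weights sum to 90%, the sketch normalizes by the total weight):

```shell
# health_score: weighted average of the seven category scores (0-100 each).
# Argument order: console links visual functional ux content accessibility.
health_score() {
  awk -v c="$1" -v l="$2" -v v="$3" -v f="$4" -v u="$5" -v t="$6" -v a="$7" 'BEGIN {
    total = c*0.15 + l*0.10 + v*0.10 + f*0.20 + u*0.15 + t*0.05 + a*0.15
    printf "%.0f\n", total / 0.90   # weights sum to 0.90, so normalize
  }'
}
```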


Phase 7: Triage

Sort by severity. Fix based on tier:

  • Quick: Fix critical + high only
  • Standard: + medium
  • Exhaustive: Fix all

Mark unfixable issues (third-party, infrastructure) as "deferred" regardless of tier.


Phase 8: Fix Loop

For each fixable issue, in severity order:

8a. Locate source

grep -r "<error-message-or-component-name>" --include="*.js" --include="*.ts" --include="*.rb" --include="*.py" .
glob: **/*.jsx, **/*.tsx, **/*.vue

8b. Fix

Read source. Make minimal fix — smallest change resolving the issue. Do NOT refactor or expand.

8c. Commit

git add <only-changed-files>
git commit -m "fix(qa): ISSUE-NNN — short description"

8d. Re-test

browser goto <affected-url>
browser screenshot
browser console --errors

8e. Classification

  • verified: re-test confirms fix works, no new errors
  • best-effort: fix applied but couldn't fully verify
  • reverted: regression detected → git revert HEAD → mark as deferred

Phase 9: Final QA

  1. Re-run QA on all affected pages
  2. Compute final health score
  3. If final score WORSE than baseline: WARN prominently

Phase 10: Report

Write report to .qa-reports/qa-report-{domain}-{YYYY-MM-DD}.md.
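The path can be derived from the target URL like so (the `report_path` name and the sed-based host extraction are simplifying assumptions; unusual URLs may need more care):

```shell
# report_path: build the report filename from the target URL and today's date.
report_path() {
  domain=$(printf '%s\n' "$1" | sed -E 's#^[a-z]+://##; s#[/:].*$##')
  printf '.qa-reports/qa-report-%s-%s.md\n' "$domain" "$(date +%F)"
}
```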

Include:

  • Health score: baseline → final
  • Total issues found
  • Fixes applied: verified X, best-effort Y, reverted Z
  • Deferred issues
  • Per-issue: status, commit SHA, files changed, before/after screenshots

Important Rules

  1. Repro is everything. Every issue needs at least one screenshot.
  2. Verify before documenting. Retry once to confirm reproducible.
  3. Never include credentials. Use [REDACTED] for passwords.
  4. Write incrementally. Append each issue to report as found.
  5. Never read source code during QA. Test as user, not developer.
  6. Check console after every interaction.
  7. Test like a user. Use realistic data. Walk complete workflows end-to-end.
  8. Depth over breadth. 5-10 well-documented issues > 20 vague descriptions.
  9. Never delete output files.
  10. Never refuse to use the browser. Backend changes affect app behavior — always open the browser and test.

Files

1 total