Audit Code

v0.1.0

Run a two-pass, multidisciplinary code audit led by a tie-breaker lead, combining security, performance, UX, DX, and edge-case analysis into one prioritized report with concrete fixes. Use when the user asks to audit code, perform a deep review, stress-test a codebase, or produce a risk-ranked remediation plan across backend, frontend, APIs, infra scripts, and product flows.

0 stars · 1.5k downloads · 5 current · 5 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for swader/agent-skills-audit.

Prompt Preview: Install & Setup
Install the skill "Audit Code" (swader/agent-skills-audit) from ClawHub.
Skill page: https://clawhub.ai/swader/agent-skills-audit
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install swader/agent-skills-audit

ClawHub CLI


npx clawhub@latest install agent-skills-audit
Security Scan
VirusTotal: Benign (report available)
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (multidisciplinary code audit) match the provided SKILL.md, audit-framework, and README. The included artifacts (audit checklist, finding schema) are appropriate. The sync script and the README's stated intent to copy the skill into agent skill directories are consistent with the goal of distributing a canonical SKILL.md to local agents.
Instruction Scope
Runtime instructions focus on reading repository code, product context, and producing findings; they explicitly load the local references/audit-framework.md. The workflow asks agents to analyze code paths, invariants, and produce evidence-based findings. There are no directives to read unrelated system files, export secrets, or contact external endpoints.
Install Mechanism
No remote install or package downloads are declared; this is an instruction-only skill with local reference files. The only executable artifact is a benign local sync script that copies/symlinks the repo into user agent skill directories if run; no network fetches or archive extraction are present.
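
The audit above describes the sync script only in outline; read the repository's actual scripts/sync-to-agents.sh before running it. As a rough sketch of what a script with that behavior might look like (the function name and the skills/ subdirectory layout are assumptions, not verified details of the real script):

```shell
#!/usr/bin/env sh
# Hypothetical sketch of a sync-to-agents script: copy a skill checkout
# into each agent's local skill directory. The real scripts/sync-to-agents.sh
# may differ; inspect it before running.
set -eu

# sync_skill SRC DEST_BASE...
# Copies the contents of SRC into DEST_BASE/skills/audit-code for each base.
sync_skill() {
  src="$1"; shift
  for base in "$@"; do
    dest="$base/skills/audit-code"
    mkdir -p "$dest"
    cp -R "$src"/. "$dest"/   # a symlink variant would use: ln -sfn "$src" "$dest"
    echo "synced $src -> $dest"
  done
}
```

A user would invoke it against the $HOME-derived directories the audit mentions, e.g. `sync_skill . "$HOME/.codex" "$HOME/.claude" "$HOME/.cursor"`, and never as root.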
Credentials
The skill requires no environment variables, credentials, or config-path access. The sync script uses $HOME to determine agent directories (expected for its purpose) but does not request secrets or unrelated service tokens.
Persistence & Privilege
The skill does not set always:true and cannot autonomously persist unless the user runs the included sync script. Running that script will create/copy files into ~/.codex, ~/.claude, or ~/.cursor which grants persistent local presence for the skill — this is explicit user-invoked behavior rather than a hidden privilege.
Assessment
This skill appears coherent and contains only local audit guidance and checklists. Before installing or running anything:

  1. Review SKILL.md and references/audit-framework.md to confirm the audit behavior.
  2. Inspect scripts/sync-to-agents.sh: it will copy (or symlink) this repo into ~/.codex, ~/.claude, or ~/.cursor when you run it, so only run it if you want the skill added to those agent directories.
  3. Do not run the sync script as root, and verify the destination paths are acceptable.
  4. Because the skill performs code analysis, avoid pointing it at secret-containing paths unless you intend that.
  5. If you plan to let an agent invoke skills autonomously, note that this skill can be invoked by agents, but it requests no credentials and contains no remote exfiltration steps.

Like a lobster shell, security has layers — review code before you run it.

latest: vk974bz68fprpp3nd2e660asj8180tfmx
1.5k downloads
0 stars
1 version
Updated 1mo ago
v0.1.0
MIT-0

Audit Code

Overview

Run an expert-panel audit with strict sequencing and one unified output document. Produce findings first, sorted by severity, with file references, exploit/perf/flow impact, and actionable fixes.

Load references/audit-framework.md before starting the analysis.

Required Inputs

Collect or infer the following:

  • Audit scope: paths, modules, PR diff, or whole repository.
  • Product context: PRD/spec/user stories, trust boundaries, and critical business flows.
  • Runtime context: deployment model, queue/cron/background jobs, traffic profile, data sensitivity, and abuse assumptions.
  • Constraints: timeline, acceptable risk, and preferred remediation style.

If product context is missing, state assumptions explicitly and continue.

Team Roles

Use exactly these roles:

  • Security expert
  • Performance expert
  • UX expert
  • DX expert
  • Edge case master
  • Tie-breaker team lead

The tie-breaker lead resolves conflicts, prioritizes issues, and produces the final single report.

Workflow

Follow this sequence every time:

  1. Build Context: Read code and product flows. Identify assets, entry points, high-risk operations, privileged actions, external dependencies, and "failure hurts" journeys.

  2. Build Invariant Coverage Matrix: Before Pass 1, map critical invariants to every mutating path (HTTP routes, webhooks, async jobs, scripts):

  • Data-link invariants: multi-table relationships that must remain consistent.
  • Auth lifecycle invariants: disable/revoke semantics for sessions/tokens/API keys.
  • Input/transport invariants: validation, content-type policy, body-size/parse behavior.
  • Shape invariants: trees/graphs must reject cycles where applicable.

  Treat missing parity across equivalent paths as a finding candidate.

  3. Pass 1 Specialist Reviews: Run role-specific analysis in this order:

  • Security
  • Performance
  • UX
  • DX
  • Edge case master

  Capture findings using the schema in references/audit-framework.md.

  4. Tie-Breaker Reconciliation: Resolve disagreements:

  • Decide whether contested items are true issues.
  • Set severity and confidence.
  • Remove duplicates and merge overlapping findings.

  5. Cross-Review Pass 2: After edge-case findings, rerun specialists:

  • Security/Performance/UX/DX reassess prior findings and new edge-triggered scenarios.
  • Edge case master performs a final pass on residual risk after proposed mitigations.

  6. Final Report: Publish one document from the tie-breaker lead with:

  • Findings first (ordered by severity, then blast radius, then exploitability).
  • Open questions/assumptions.
  • Remediation plan with priority, owner type, and verification tests.
  • Short executive summary at the end.
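
The severity-then-blast-radius-then-exploitability ordering for the final report can be made mechanical. As an illustrative sketch (the TSV column layout and numeric ranks are assumptions, not part of the skill), findings could be ordered like so:

```shell
# Sketch: order findings for the final report. Assumed TSV columns:
# severity rank (1 = critical, ascending), blast radius score (descending),
# exploitability score (descending), then the finding title.
sort_findings() {
  tab="$(printf '\t')"
  sort -t "$tab" -k1,1n -k2,2rn -k3,3rn "$1"
}
```

Any column scheme works as long as the comparison order matches the report's rule: severity first, then blast radius, then exploitability.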

Quality Bar

Enforce these requirements:

  • Use concrete evidence with file references and line numbers where available.
  • Include reproduction steps for security/performance/edge findings when feasible.
  • Prefer actionable fixes over abstract advice.
  • Separate confirmed defects from speculative risks.
  • Mark confidence for each finding.
  • Run a cross-route consistency sweep: equivalent endpoints/jobs must enforce equivalent invariants.
  • For each High/Critical finding, include at least one focused regression test/check.
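
The cross-route consistency sweep can be seeded mechanically before manual review. A minimal sketch, assuming TypeScript route files and a validation helper named validateBody (both names are illustrative, not from the skill):

```shell
# Sketch: list route files that never reference the validation helper.
# Each hit is a candidate finding, not a confirmed defect; equivalent
# paths may enforce the invariant elsewhere.
sweep_unvalidated() {
  dir="$1"; helper="$2"
  grep -rL "$helper" "$dir" --include='*.ts' || true
}
```

Example: `sweep_unvalidated src/routes validateBody` prints every .ts file under src/routes with no call to the helper, giving reviewers a worklist for the parity check.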

Safety and Policy Guardrails

Apply these guardrails while auditing:

  • Do not provide operational abuse instructions or exploit weaponization details.
  • Evaluate manipulative UX patterns as legal/trust/reputation risk, not as recommended growth tactics.
  • Prioritize user safety, system integrity, and maintainable engineering outcomes.

Output Format

Follow this response structure:

  1. Findings: List only validated issues. Use the finding schema in references/audit-framework.md.

  2. Open Questions / Assumptions: State missing context that could change priority or validity.

  3. Change Summary: Summarize high-impact remediation themes in a few lines.

  4. Suggested Verification: List focused tests/checks to confirm each major fix.

Runtime Heuristics

When the target stack is Bun + SQLite, apply the "Runtime-Specific Heuristics (Bun + SQLite)" checklist in references/audit-framework.md before finalizing findings.
