vibe-check

Audit code for "vibe coding sins" — patterns that indicate AI-generated code was accepted without proper review. Produces a scored report card with fix sugge...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
by Todd Kuehnl (@tkuehnl)
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (vibe-check: audit code for 'vibe coding sins') match the included scripts and report generation. The scripts implement LLM-backed analysis, diff-mode, and --fix suggestion generation which are all coherent with the stated purpose. Minor metadata inconsistencies: SKILL.md and some files call out/expect ANTHROPIC_API_KEY or OPENAI_API_KEY and VIBE_CHECK_BATCH_SIZE, but the registry metadata listed no required env vars — those API keys are optional at runtime but necessary for the LLM-powered mode. SKILL.md also lists version 0.1.1 while the registry shows 0.2.1 (harmless but inconsistent).
Instruction Scope
The runtime instructions tell the agent to run the included shell scripts which: discover and read repository source files (or git diffs), build prompts containing the full file contents and call external LLM APIs (anthropic/openai) using curl. That means source code (including any secrets or credentials present in files) will be transmitted to third-party APIs when API keys are configured. This behavior is expected for an LLM-based auditor, but it is a data-exfiltration/privacy risk that must be accepted consciously. The scripts otherwise stay within scope (they read files, produce reports, and do not auto-apply fixes or push changes).
Install Mechanism
No install spec is provided (instruction-only skill plus shipped scripts). Nothing in the manifest performs network downloads or extracts external archives. The code is included in the skill bundle and executed locally; no high-risk install steps were found.
Credentials
The skill does not request unrelated credentials. It optionally uses ANTHROPIC_API_KEY or OPENAI_API_KEY (appropriate for an LLM-backed auditor) and a tuning var VIBE_CHECK_BATCH_SIZE. The registry metadata omitted these optional environment variables; the README and scripts clearly document them as optional. Requiring user LLM API keys is proportionate to the tool's stated LLM capability but increases the risk that repository contents are sent to those providers.
Persistence & Privilege
always: false and default agent invocation settings are used. The skill does not request permanent presence, does not attempt to modify other skills or system-wide agent settings, and does not auto-apply patches or perform git operations. It writes a report file only when --output is specified by the user.
Assessment
This skill is internally consistent with its purpose: it reads your repo, runs local heuristics, and, if you configure ANTHROPIC_API_KEY or OPENAI_API_KEY, sends file contents to those third-party LLM endpoints for richer analysis. Before installing or using it, consider:

  1. Do not set an LLM API key if you are scanning sensitive repos (secrets, proprietary code); you can run it in heuristic mode by unsetting those env vars.
  2. The tool only emits unified-diff suggestions and does not auto-apply them, but you should manually review any suggested fixes before applying.
  3. Confirm you are comfortable with the agent being allowed to run the supplied scripts (they read files and may call network endpoints).
  4. Note the minor metadata inconsistencies (declared env vars absent from registry metadata; SKILL.md version differs from registry version); these are not malicious but worth verifying with the publisher if you need strong provenance.

If you need more assurance, inspect the included scripts locally and run them in a controlled environment (sandbox or test repo) with LLM keys unset.
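The "run with LLM keys unset" advice above can be made concrete with a small wrapper; a minimal sketch, assuming only that the scripts honor the documented environment variables (the `run_heuristic` name is hypothetical, not part of the shipped scripts):

```shell
# Hypothetical wrapper: strip the LLM API keys from the child environment
# so vibe-check falls back to heuristic-only analysis and no file
# contents are sent to third-party APIs for this invocation.
run_heuristic() {
  env -u ANTHROPIC_API_KEY -u OPENAI_API_KEY "$@"
}

# Usage (script path as documented in SKILL.md):
# run_heuristic bash "$SKILL_DIR/scripts/vibe-check.sh" src/
```

Using `env -u` scopes the removal to one command, so keys exported in your shell profile remain available for other tools.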


Current version: v0.2.1
latest: vk97a5rxzcjvqfzaqya5nfkqj7h81jjss


SKILL.md

🎭 Vibe Check

Audit code for "vibe coding" — AI-generated code accepted without proper human review. Get a scored report card with specific findings and fix suggestions.

Trigger

Activate when the user mentions any of:

  • "vibe check"
  • "vibe-check"
  • "audit code"
  • "code quality"
  • "vibe score"
  • "check my code"
  • "review this code for vibe coding"
  • "code review"
  • "vibe audit"

Instructions

1. Determine the Target

Ask the user what code to analyze. Accepted inputs:

  • Single file: app.py, src/utils.ts
  • Directory: src/, ., my-project/
  • Git diff: last N commits, staged changes, or branch comparison
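The three accepted target types above can be told apart with a small dispatcher; a hypothetical sketch (the `classify_target` name and its output labels are illustrative; the shipped scripts may classify differently):

```shell
# Hypothetical dispatch for the three accepted target types:
# flags map to diff mode, paths are checked as directory or file.
classify_target() {
  case "$1" in
    --diff|--staged) echo "diff" ;;
    *)
      if [ -d "$1" ]; then echo "directory"
      elif [ -f "$1" ]; then echo "file"
      else echo "unknown"
      fi ;;
  esac
}
```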

2. Run the Analysis

# Single file or directory
bash "$SKILL_DIR/scripts/vibe-check.sh" TARGET

# With fix suggestions
bash "$SKILL_DIR/scripts/vibe-check.sh" --fix TARGET

# Git diff (last 3 commits)
bash "$SKILL_DIR/scripts/vibe-check.sh" --diff HEAD~3

# Staged changes with fixes
bash "$SKILL_DIR/scripts/vibe-check.sh" --staged --fix

# Save to file
bash "$SKILL_DIR/scripts/vibe-check.sh" --fix --output report.md TARGET

3. Present the Report

The output is a Markdown report. Present it directly — it's designed to be screenshot-worthy.

Discord v2 Delivery Mode (OpenClaw v2026.2.14+)

When the conversation is happening in a Discord channel:

  • Send a compact summary first (grade, score, file count, top 3 findings), then ask if the user wants the full report.
  • Keep the first message under ~1200 characters and avoid wide Markdown tables in the first response.
  • If Discord components are available, include quick actions:
    • Show Top Findings
    • Show Fix Suggestions
    • Run Diff Mode
  • If components are not available, provide the same follow-ups as a numbered list.
  • Prefer short follow-up chunks (<=15 lines per message) when sending the full report.
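The "<=15 lines per message" guidance above can be sketched as a simple chunker (the `chunk_report` helper is hypothetical; a real integration would send each chunk as its own Discord message rather than print a separator):

```shell
# Hypothetical sketch: print a report with a separator after every 15
# lines, matching the follow-up chunking guidance above.
chunk_report() {
  awk 'NR > 1 && (NR - 1) % 15 == 0 { print "--- chunk break ---" } { print }' "$1"
}
```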

Quick Reference

| Command | Description |
| --- | --- |
| vibe-check FILE | Analyze a single file |
| vibe-check DIR | Scan directory recursively |
| vibe-check --diff | Check last commit's changes |
| vibe-check --diff HEAD~5 | Check last 5 commits |
| vibe-check --staged | Check staged changes |
| vibe-check --fix DIR | Include fix suggestions |
| vibe-check --output report.md DIR | Save report to file |

Sin Categories (what it checks)

| Category | Weight | What It Catches |
| --- | --- | --- |
| Error Handling | 20% | Missing try/catch, bare exceptions, no edge cases |
| Input Validation | 15% | No type checks, no bounds checks, trusting all input |
| Duplication | 15% | Copy-pasted logic, DRY violations |
| Dead Code | 10% | Unused imports, commented-out blocks, unreachable code |
| Magic Values | 10% | Hardcoded strings/numbers/URLs without constants |
| Test Coverage | 10% | No test files, no test patterns, no assertions |
| Naming Quality | 10% | Vague names (data, result, temp, x), misleading names |
| Security | 10% | eval(), exec(), hardcoded secrets, SQL injection |
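The category weights above sum to 100%, so an overall score is a straight weighted average of per-category scores. A hypothetical sketch (the `weighted_score` helper and its argument order are assumptions; the shipped scripts may combine scores differently):

```shell
# Hypothetical sketch: combine eight per-category scores (each 0-100)
# using the weights from the table above. Argument order:
#   error_handling input_validation duplication dead_code
#   magic_values test_coverage naming security
weighted_score() {
  awk -v eh="$1" -v iv="$2" -v dup="$3" -v dc="$4" \
      -v mv="$5" -v tc="$6" -v nm="$7" -v sec="$8" \
      'BEGIN { s = eh*0.20 + iv*0.15 + dup*0.15 + dc*0.10; s += mv*0.10 + tc*0.10 + nm*0.10 + sec*0.10; printf "%.0f\n", s }'
}
```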

Scoring

  • A (90-100): Pristine code, minimal issues
  • B (80-89): Clean code with minor issues
  • C (70-79): Decent but lazy patterns crept in
  • D (60-69): Needs human attention
  • F (<60): Heavy vibe coding detected
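The grade bands above map directly to threshold checks; a minimal sketch (the `grade` function name is illustrative, not taken from the shipped scripts):

```shell
# Hypothetical mapping of a 0-100 integer score to the letter bands
# listed above (A >= 90, B >= 80, C >= 70, D >= 60, else F).
grade() {
  if   [ "$1" -ge 90 ]; then echo "A"
  elif [ "$1" -ge 80 ]; then echo "B"
  elif [ "$1" -ge 70 ]; then echo "C"
  elif [ "$1" -ge 60 ]; then echo "D"
  else echo "F"
  fi
}
```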

Notes for the Agent

  • The report is the star. Present it in full — it's designed to look great.
  • After presenting, offer to run --fix mode if they didn't already.
  • Suggest the README badge: ![Vibe Score](https://img.shields.io/badge/vibe--score-XX%2F100-COLOR)
  • For large codebases, suggest focusing on specific directories or using --diff mode.
  • If no LLM API key is set, the tool falls back to heuristic analysis (less accurate but still useful).
  • Supported languages (v1): Python, TypeScript, JavaScript only.
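The badge suggestion in the notes above can be generated from the score; a sketch, assuming a grade-to-color mapping of our own choosing (the color names are standard shields.io named colors, but which band gets which color is an assumption):

```shell
# Hypothetical sketch: build the suggested shields.io badge markdown,
# picking a color by the same score bands used for letter grades.
vibe_badge() {
  score=$1
  if   [ "$score" -ge 90 ]; then color=brightgreen
  elif [ "$score" -ge 80 ]; then color=green
  elif [ "$score" -ge 70 ]; then color=yellow
  elif [ "$score" -ge 60 ]; then color=orange
  else color=red
  fi
  echo "![Vibe Score](https://img.shields.io/badge/vibe--score-${score}%2F100-${color})"
}
```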

References

  • scripts/vibe-check.sh — Main entry point
  • scripts/analyze.sh — LLM code analysis engine (with heuristic fallback)
  • scripts/git-diff.sh — Git diff file extractor
  • scripts/report.sh — Markdown report generator
  • scripts/common.sh — Shared utilities and constants

Examples

Example 1: Audit a Directory

User: "Vibe check my src directory"

Agent runs:

bash "$SKILL_DIR/scripts/vibe-check.sh" src/

Output: Full scorecard with per-file breakdown, category scores, and top findings.

Example 2: Check with Fixes

User: "Review this code for vibe coding and suggest fixes"

Agent runs:

bash "$SKILL_DIR/scripts/vibe-check.sh" --fix src/

Output: Scorecard + unified diff patches for each finding.

Example 3: Git Diff Mode

User: "Check the code quality of my last 3 commits"

Agent runs:

bash "$SKILL_DIR/scripts/vibe-check.sh" --diff HEAD~3

Output: Scorecard focused only on recently changed files.

Files

12 total
