vibe-check

Pass. Audited by VirusTotal on May 12, 2026.

Overview

Type: OpenClaw Skill
Name: vibe-check
Version: 0.2.1

The OpenClaw AgentSkills 'vibe-check' skill is a code auditing tool designed to identify 'vibe coding sins' and security vulnerabilities in user-provided code. The skill's scripts (`vibe-check.sh`, `analyze.sh`, `report.sh`, `git-diff.sh`, `common.sh`) demonstrate robust shell scripting practices, including `set -euo pipefail`, proper quoting of variables, and safe path resolution. Crucially, `analyze.sh` uses `python3 -c "import json, sys; print(json.dumps(prompt))"` to safely JSON-escape file content before sending it to LLM APIs, mitigating prompt injection risks from the analyzed code.

The `SECURITY.md` and `README.md` clearly state the skill's read-only nature and human-in-the-loop fix suggestions, and transparently disclose its network behavior (sending code to LLMs). The `test_samples/bad_api.py` file, while containing severe vulnerabilities such as RCE via `eval()` and SQL injection, is a test case for the skill's detection capabilities and is not executed by the skill itself. There is no evidence of intentional harmful behavior or self-exploitation.
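
The escaping step credited to `analyze.sh` can be sketched as follows; the variable names here are illustrative, not the skill's actual ones. Piping untrusted content through `json.dumps` turns it into a single safe JSON string literal before it is embedded in an API request body:

```shell
# Illustrative sketch of the json.dumps escaping pattern described above.
file_content='line with "quotes" and $(dangerous) text'
escaped=$(printf '%s' "$file_content" \
  | python3 -c 'import json, sys; print(json.dumps(sys.stdin.read()))')
echo "$escaped"   # quotes are escaped; shell metacharacters are inert inside the string
```

Because the content arrives via stdin and leaves as a quoted JSON string, neither the shell nor the JSON request body ever interprets it.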

Findings (0)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Private source code, and any secrets inside that code, could leave the local environment even when the user has not set Anthropic or OpenAI API keys.

Why it was flagged

The prompt built by this script includes analyzed file contents, and this fallback sends that prompt through the OpenClaw LLM gateway whenever the CLI is available.

Skill content
result=$(echo "$prompt" | openclaw llm --raw 2>/dev/null) || result=""
Recommendation

Make LLM use explicit, add a documented local-only/no-LLM mode, and clearly disclose the OpenClaw gateway destination before scanning sensitive repositories.
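
One way to realize this recommendation is a single explicit gate in bash; `VIBE_CHECK_NO_LLM` is a hypothetical variable name for illustration, not one the skill currently defines:

```shell
# Hypothetical opt-out gate: the LLM fallback runs only when the user has
# not disabled it AND the openclaw CLI is actually present.
VIBE_CHECK_NO_LLM=1   # demo: user forces local-only analysis

use_llm=false
if [ "${VIBE_CHECK_NO_LLM:-}" != "1" ] && command -v openclaw >/dev/null 2>&1; then
  use_llm=true
fi

echo "use_llm=$use_llm"   # with the opt-out set, no code leaves the machine
```

Printing the resolved mode before scanning also gives the user a chance to abort before any content is transmitted.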

What this means

A user could run the skill on sensitive code believing it will stay local, while the OpenClaw LLM fallback may still transmit code content.

Why it was flagged

This instruction in SKILL.md is incomplete: `scripts/analyze.sh` also tries `openclaw llm` when the CLI is available, so users may wrongly infer that unsetting provider keys prevents all outbound LLM use.

Skill content
If no LLM API key is set, the tool falls back to heuristic analysis (less accurate but still useful).
Recommendation

Align SKILL.md, README, SECURITY.md, and runtime behavior; explicitly tell users when any LLM path will be used and how to force heuristic-only analysis.

What this means

Configured provider keys authorize external API calls that transmit selected code for analysis, and those calls may incur usage costs.

Why it was flagged

The script uses Anthropic or OpenAI API keys for the expected LLM analysis; the visible artifacts show no hardcoded keys or intentional key logging.

Skill content
-H "x-api-key: ${ANTHROPIC_API_KEY}" ... -H "Authorization: Bearer ${OPENAI_API_KEY}"
Recommendation

Use limited-scope provider keys, avoid scanning secret-heavy repositories with LLM mode enabled, and unset keys when local heuristic analysis is desired.
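
The header pattern quoted above implies a simple provider-selection order. A hedged sketch of that logic (demo values, not real keys; the skill's actual selection code is not shown in the artifacts):

```shell
# Sketch of provider selection based on which key is configured.
ANTHROPIC_API_KEY="demo-key"   # demo value; a real key comes from the environment
OPENAI_API_KEY=""

provider="none"
if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
  provider="anthropic"   # would call curl ... -H "x-api-key: ${ANTHROPIC_API_KEY}"
elif [ -n "${OPENAI_API_KEY:-}" ]; then
  provider="openai"      # would call curl ... -H "Authorization: Bearer ${OPENAI_API_KEY}"
fi
echo "provider: $provider"
```

Under this logic, unsetting both keys is what routes analysis to the local heuristic path, which is why the recommendation suggests it for secret-heavy repositories.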

Note (Medium Confidence)
ASI01: Agent Goal Hijack
What this means

A hostile repository could make the report or suggested patches misleading if the model follows instructions hidden in code comments.

Why it was flagged

User/repository code is embedded directly into an LLM prompt, so malicious comments in analyzed code could try to influence the generated findings or fixes.

Skill content
**File:** \`${file_path}\` ... ${file_content} ... Respond with ONLY a JSON object
Recommendation

Treat analyzed code as untrusted prompt content, keep strict JSON parsing, and require human review before acting on generated findings or patches.
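
The strict-JSON-parsing part of this recommendation can be sketched as follows: any model reply that is not a lone JSON object is rejected outright, which drops injected prose wrapped around (or in place of) the payload:

```shell
reply='{"findings": []}'   # stand-in for a model reply

# Reject anything that json.load cannot parse as a bare JSON object.
if printf '%s' "$reply" \
  | python3 -c 'import json, sys; d = json.load(sys.stdin); sys.exit(0 if isinstance(d, dict) else 1)' 2>/dev/null; then
  status="parsed"
else
  status="rejected"   # e.g. "Sure! Here is the JSON: {...}" fails json.load
fi
echo "$status"
```

Strict parsing limits what a hijacked model can smuggle into the report, but it does not validate the findings themselves, hence the human-review requirement.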

What this means

Users relying on registry metadata may miss that the skill depends on local command-line tools and can make network calls using LLM credentials.

Why it was flagged

The README documents runtime tools and optional credentials, while the registry metadata lists no required binaries, env var declarations, or install spec.

Skill content
Requirements: bash (4.0+), python3 (stdlib only — no pip installs), curl (for LLM API calls), ANTHROPIC_API_KEY or OPENAI_API_KEY
Recommendation

Declare runtime binaries, optional credential names, and network capability metadata so users can assess the environment requirements before installation.
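
As an illustration only, a declaration along these lines could surface those requirements up front; the actual registry schema is not shown in the artifacts, so every field name here is hypothetical:

```json
{
  "name": "vibe-check",
  "version": "0.2.1",
  "requires": { "binaries": ["bash", "python3", "curl"] },
  "env": { "optional": ["ANTHROPIC_API_KEY", "OPENAI_API_KEY"] },
  "capabilities": { "network": true, "filesystem": "read-only" }
}
```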