Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Code QC

v1.0.0

Run a structured quality control audit on any codebase. Use when asked to QC, audit, review, or check code quality for a project. Supports Python, TypeScript...

0 stars · 675 downloads · 2 current · 2 all-time
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
The name/description (code quality audit) matches the included materials: SKILL.md describes running tests, linters, type checks and saving baselines, and the repository includes helper scripts (import_check.py, syntax_check.py, docstring_check.py) used by those phases. No unrelated credentials, binaries, or install steps are declared.
Instruction Scope
Instructions explicitly tell the agent to run project test suites, import modules, run linters, run smoke tests, and inspect git state and config files (e.g., .qc-config.yaml). Those actions are expected for a QC tool but have an important runtime implication: importing modules and running tests may execute arbitrary project code (including network I/O, filesystem changes, or side effects). The SKILL.md itself does not instruct exfiltration or contacting unknown external endpoints, but several phases (smoke tests, API/UI checks, running test suites) will execute project code and potentially reach external resources unless the operator restricts the environment.
Install Mechanism
There is no install spec (instruction-only skill). The documentation instructs use of external tools (ruff, eslint, mypy, gdtoolkit, pytest, jest, etc.) and gives pip/npx commands to install them. That is normal, but the skill assumes those tools are available and will invoke them; nothing in the package downloads arbitrary code during installation.
Credentials
The skill declares no required environment variables, credentials, or config paths. SKILL.md references common CI/environment detection variables for detection purposes (e.g., CI, GITHUB_ACTIONS, KUBERNETES_SERVICE_HOST) but does not require or exfiltrate them. The requested access (filesystem, git state, running tests/imports) is proportional to a code-audit tool.
Persistence & Privilege
Flags indicate no 'always: true' or other elevated persistence. The skill is user-invocable and may be run autonomously per platform defaults, which is expected for skills. It does not request to modify other skills or global agent configuration.
Assessment
This skill appears coherent for auditing codebases. Important safety notes before you run it:

  1. Many phases intentionally import project packages and run tests/smoke tests — those actions execute the project's code, which can perform network calls, modify files, or run arbitrary commands. Only run this on code you trust, or inside an isolated environment (container/VM) with restricted network and file permissions.
  2. Avoid `--fix` or other automatic fix modes on unreviewed repos (they apply code changes).
  3. Inspect any .qc-config.yaml and the included helper scripts (import_check.py, docstring_check.py, syntax_check.py) if you need assurance about behavior — import_check.py uses importlib to import modules (normal for import checks, but see point 1).
  4. If you plan to run on untrusted code, use a CI-like sandbox with no secrets mounted and limited outbound network access.

If you want, I can highlight the exact lines in the helper scripts that cause code execution and suggest a safe sandbox command to run the audit.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97d0j9fcq59mj1xptpbyyqyhx81c352
675 downloads · 0 stars · 1 version
Updated 3h ago
v1.0.0
MIT-0

Code QC

Structured quality control audit for codebases. Delegates static analysis to proper tools (ruff, eslint, gdlint) and focuses on what AI adds: semantic understanding, cross-module consistency, and dynamic smoke test generation.

Quick Start

  1. Detect project type (read the profile for that language)
  2. Load .qc-config.yaml if present (for custom thresholds/exclusions)
  3. Run the 8-phase audit (or subset with --quick)
  4. Generate report with verdict
  5. Save baseline for future comparison

Configuration (.qc-config.yaml)

Optional project-level config for monorepos and custom settings:

# .qc-config.yaml
thresholds:
  test_failure_rate: 0.05    # >5% = FAIL, 0-5% = WARN, 0% = PASS
  lint_errors_max: 0         # Max lint errors before FAIL
  lint_warnings_max: 50      # Max warnings before WARN
  type_errors_max: 0         # Max type errors before FAIL (strict by default)

exclude:
  dirs: [vendor, third_party, generated]
  files: ["*_generated.py", "*.pb.go"]

changed_only: false          # Only check git-changed files (CI mode)
fail_fast: false             # Stop on first failure
quick_mode: false            # Only run Phase 1, 3, 3.5, 6

languages:
  python:
    min_coverage: 80
    ignore_rules: [T201]     # Allow print in this project
  typescript:
    strict_mode: true        # Require tsconfig strict: true
    ignore_rules: []         # eslint rules to ignore
  gdscript:
    godot_version: "4.2"

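Overrides from .qc-config.yaml can be layered over built-in defaults with a small deep merge — a minimal sketch, assuming the YAML has already been parsed into a dict (all names here are illustrative, not part of the skill):

```python
# Sketch: overlay a parsed .qc-config.yaml on default settings.
# Assumes the file has already been parsed (e.g. with yaml.safe_load).
DEFAULTS = {
    "thresholds": {"test_failure_rate": 0.05, "lint_errors_max": 0,
                   "lint_warnings_max": 50, "type_errors_max": 0},
    "changed_only": False,
    "fail_fast": False,
}

def merge_config(defaults: dict, overrides: dict) -> dict:
    """Recursively overlay user overrides on the default config."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged

# A partial override keeps untouched defaults intact
config = merge_config(DEFAULTS, {"thresholds": {"lint_warnings_max": 10}})
```
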
Execution Modes

| Mode | Phases Run | Use Case |
| --- | --- | --- |
| Full (default) | All 8 phases | Thorough audit |
| `--quick` | 1, 3, 3.5, 6 | Fast sanity check |
| `--changed-only` | All, filtered | CI on pull requests |
| `--fail-fast` | All, stops early | Find first issue fast |
| `--fix` | 3 with autofix | Apply automatic fixes |

Phase Overview

| # | Phase | What | Tools |
| --- | --- | --- | --- |
| 1 | Test Suite | Run existing tests + coverage | pytest --cov / jest --coverage |
| 2 | Import Integrity | Verify all modules load | scripts/import_check.py |
| 3 | Static Analysis | Lint with proper tools | ruff / eslint / gdlint |
| 3.5 | Type Checking | Static type verification | mypy / tsc --noEmit / (N/A for GDScript) |
| 4 | Smoke Tests | Verify business logic works | AI-generated per project |
| 5 | UI/Frontend | Verify UI components load | Framework-specific |
| 6 | File Consistency | Syntax + git state | scripts/syntax_check.py + git |
| 7 | Documentation | Docstrings + docs accuracy | scripts/docstring_check.py |

Phase Details

Phase 1: Test Suite

Run the project's test suite with coverage. Auto-detect the test runner:

pytest.ini / pyproject.toml [tool.pytest] → pytest --cov
package.json scripts.test → npm test (or npx vitest --coverage)
Cargo.toml → cargo test
project.godot → (GUT if present, else manual)
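
The auto-detection above could be sketched roughly as follows — illustrative only; the skill's actual detection logic may differ:

```python
# Sketch: detect the test runner from marker files (illustrative).
from pathlib import Path

def detect_test_runner(root):
    p = Path(root)
    if (p / "pytest.ini").exists():
        return "pytest --cov"
    pyproject = p / "pyproject.toml"
    if pyproject.exists() and "[tool.pytest" in pyproject.read_text():
        return "pytest --cov"
    if (p / "package.json").exists():
        return "npm test"          # or npx vitest --coverage
    if (p / "Cargo.toml").exists():
        return "cargo test"
    if (p / "project.godot").exists():
        return "GUT"               # GUT if present, else manual
    return None
```
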

Record: total, passed, failed, errors, skipped, duration, coverage %.

Verdict contribution:

  • No tests found → SKIP (not FAIL; project may be config-only)
  • Failure rate = 0% → PASS
  • 0% < failure rate ≤ threshold (default 5%) → WARN
  • Failure rate > threshold → FAIL

Coverage reporting (Python):

pytest --cov=<package> --cov-report=term-missing --cov-report=json

Phase 2: Import Integrity (Python/GDScript)

Python: Run scripts/import_check.py against the project root.

GDScript: Verify scene/preload references are valid (see gdscript-profile.md).

Critical vs Optional Import Classification

Use these heuristics to classify import failures:

| Pattern | Classification | Rationale |
| --- | --- | --- |
| __init__.py, main.py, app.py, cli.py | Critical | Core entry points |
| Module in src/, lib/, or top-level package | Critical | Core functionality |
| *_test.py, test_*.py, conftest.py | Optional | Test infrastructure |
| Modules in examples/, scripts/, tools/ | Optional | Auxiliary code |
| Import error mentions cuml, triton, tensorrt | Optional | Hardware-specific |
| Import error mentions missing system lib | Optional | Environment-specific |
| Dependency in [project.optional-dependencies] | Optional | Declared optional |
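
A rough Python rendering of these heuristics might look like this — patterns and names are illustrative, not the skill's actual import_check.py:

```python
# Sketch of the critical-vs-optional classification heuristics above.
import fnmatch

CRITICAL_NAMES = {"__init__.py", "main.py", "app.py", "cli.py"}
OPTIONAL_FILE_PATTERNS = ["*_test.py", "test_*.py", "conftest.py"]
OPTIONAL_DIRS = ("examples/", "scripts/", "tools/")
OPTIONAL_ERROR_HINTS = ("cuml", "triton", "tensorrt")

def classify_import_failure(path: str, error: str = "") -> str:
    """Classify a failed import as 'optional' or 'critical'."""
    name = path.rsplit("/", 1)[-1]
    if any(fnmatch.fnmatch(name, pat) for pat in OPTIONAL_FILE_PATTERNS):
        return "optional"          # test infrastructure
    if path.startswith(OPTIONAL_DIRS):
        return "optional"          # auxiliary code
    if any(hint in error for hint in OPTIONAL_ERROR_HINTS):
        return "optional"          # hardware-specific dependency
    if name in CRITICAL_NAMES or path.startswith(("src/", "lib/")):
        return "critical"          # core entry points / core functionality
    return "critical"              # default to critical: safer to over-report
```
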

Phase 3: Static Analysis

Do NOT use grep. Use the language's standard linter.

Standard Mode

# Python
ruff check --select E722,T201,B006,F401,F841,UP,I --statistics <project>

# TypeScript  
npx eslint . --format json

# GDScript
gdlint <project>

Fix Mode (--fix)

When --fix is specified, apply automatic corrections:

# Python — safe auto-fixes
ruff check --fix --select E,F,I,UP <project>
ruff format <project>

# TypeScript
npx eslint . --fix

# GDScript
gdformat <project>

Important: After --fix, re-run the check to report remaining issues that couldn't be auto-fixed.

Phase 3.5: Type Checking (NEW)

Run static type analysis before proceeding to runtime checks.

Python:

mypy <package> --ignore-missing-imports --no-error-summary
# or if pyproject.toml has [tool.pyright]:
pyright <package>

TypeScript:

npx tsc --noEmit

GDScript: Godot 4 has built-in static typing but no standalone checker. Estimate type coverage manually:

# Find untyped declarations
grep -rn "var \w\+ =" --include="*.gd" .       # Untyped variables (no ": Type" or ":=")
grep -rn "func \w\+(" --include="*.gd" . | grep -v '\->'  # Functions without a return type

Use the estimate_type_coverage() function from gdscript-profile.md to calculate coverage per file:

# From gdscript-profile.md
def estimate_type_coverage(gd_file: str) -> float:
    """Count typed vs untyped declarations."""
    # See full implementation in gdscript-profile.md

Also check for @warning_ignore annotations which may hide type issues.
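
For illustration, a regex-based counter in the spirit of estimate_type_coverage() might look like this — the authoritative implementation lives in gdscript-profile.md; this sketch only approximates it:

```python
# Illustrative typed-vs-untyped declaration counter for GDScript source.
import re

def estimate_type_coverage(source: str) -> float:
    """Return the fraction of var/func declarations that carry type hints."""
    typed = untyped = 0
    for line in source.splitlines():
        line = line.strip()
        if line.startswith("var "):
            # "var x: int = 1" and "var x := 1" are typed; "var x = 1" is not
            if re.match(r"var\s+\w+\s*:", line):
                typed += 1
            else:
                untyped += 1
        elif line.startswith("func "):
            # a return type ("-> T") or a typed parameter counts as typed
            if "->" in line or re.search(r"\(\s*\w+\s*:", line):
                typed += 1
            else:
                untyped += 1
    total = typed + untyped
    return typed / total if total else 1.0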

Record: Total errors, categorized by severity.

Phase 4: Smoke Tests (Business Logic)

Test backend/core functionality — NOT UI components (that's Phase 5).

API Discovery Heuristics:

  1. Entry points: Look for main(), cli(), app, create_app(), __main__.py
  2. Service layer: Find classes/modules named *Service, *Manager, *Handler
  3. Public API: Check __all__ exports in __init__.py
  4. FastAPI/Flask: Find route decorators (@app.get, @router.post)
  5. CLI: Find typer/click @app.command() decorators
  6. SDK: Look for client classes, public methods without _ prefix

For each discovered API, generate a minimal test:

def smoke_test_user_service():
    """Test UserService basic CRUD."""
    from myproject.services.user import UserService
    svc = UserService(db=":memory:")
    user = svc.create(name="test")
    assert user.id is not None
    fetched = svc.get(user.id)
    assert fetched.name == "test"
    return "PASS"

Guidelines:

  • Import + instantiate + call one method with minimal valid input
  • Use in-memory/temp resources (:memory:, tempdir)
  • Each test < 5 seconds
  • Catch exceptions, report clearly
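
A harness that applies these guidelines, catching each failure and reporting it clearly, could be sketched as follows (names are illustrative):

```python
# Sketch: run generated smoke tests; one failure never stops the run.
import time
import traceback

def run_smoke_tests(tests) -> list:
    """Run each zero-argument test callable and collect structured results."""
    results = []
    for test in tests:
        start = time.monotonic()
        try:
            test()
            status, detail = "PASS", ""
        except Exception:
            status, detail = "FAIL", traceback.format_exc(limit=1)
        results.append({"name": test.__name__,
                        "status": status,
                        "seconds": round(time.monotonic() - start, 2),
                        "detail": detail})
    return results
```
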

Phase 5: UI/Frontend Verification

Test UI components separately from business logic.

| Framework | Test Method |
| --- | --- |
| Gradio | from project.ui import create_ui (no launch()) |
| Streamlit | streamlit run app.py --headless exits cleanly |
| PyQt/PySide | Set QT_QPA_PLATFORM=offscreen, import widget modules |
| React | npm run build succeeds |
| Vue | npm run build succeeds |
| Godot | Scene files parse without error, required scripts exist |
| CLI | --help on all subcommands returns 0 |
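
The CLI check (last row above) can be done with a small subprocess wrapper — a sketch, not part of the skill:

```python
# Sketch: verify that a CLI's --help exits with status 0.
import subprocess
import sys

def help_exits_cleanly(cmd: list) -> bool:
    """Run `<cmd> --help` and report whether it exits 0."""
    result = subprocess.run(cmd + ["--help"],
                            capture_output=True, text=True, timeout=30)
    return result.returncode == 0
```

For example, help_exits_cleanly([sys.executable]) holds on any working Python install.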

Boundary: Phase 4 tests "does the logic work?" Phase 5 tests "does the UI render?"

Phase 6: File Consistency

Run scripts/syntax_check.py — compiles all source files to verify no syntax errors.

Note: Phase 2 (Import Integrity) tests runtime import behavior including initialization code. Phase 6 tests static syntax correctness. Both are needed: a file can have valid syntax but fail to import (e.g., missing dependency), or vice versa (syntax error in a module that's never imported).
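
The static half of this distinction can be illustrated with Python's built-in compile(), which parses source without executing any module code — a sketch, not the bundled syntax_check.py:

```python
# Sketch: syntax-only check; nothing is imported or executed.
def syntax_errors(paths) -> list:
    """Compile each Python file; collect (path, message) for syntax errors."""
    errors = []
    for path in paths:
        try:
            with open(path, encoding="utf-8") as f:
                compile(f.read(), path, "exec")
        except SyntaxError as exc:
            errors.append((path, f"line {exc.lineno}: {exc.msg}"))
    return errors
```
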

Check git state:

git status --short      # Should be clean (or report uncommitted changes)
git diff --check        # No conflict markers

Phase 7: Documentation

Run scripts/docstring_check.py (now checks __init__.py by default).

Also verify:

  • README exists and is non-empty
  • Key docs (CHANGELOG, CONTRIBUTING) exist if referenced
  • No stale TODO markers in docs claiming completion
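
An ast-based docstring scan in the spirit of docstring_check.py might look like this — illustrative only; the bundled script may behave differently:

```python
# Sketch: list public functions/classes that lack docstrings, using ast only.
import ast

def missing_docstrings(source: str) -> list:
    """Return names of public defs/classes without a docstring."""
    missing = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if not node.name.startswith("_") and ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing
```
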

Verdict Logic

# Calculate test failure rate (guard: no tests → rate 0; Phase 1 reports SKIP)
failure_rate = test_failures / total_tests if total_tests else 0.0

# Default thresholds (override in .qc-config.yaml)
FAIL_THRESHOLD = 0.05  # 5%; 0 < rate ≤ 5% → WARN
TYPE_ERRORS_MAX = 0    # Default: strict (any type error = FAIL)

# Verdict determination
if any([
    failure_rate > FAIL_THRESHOLD,
    critical_import_failure,
    type_check_errors > thresholds.type_errors_max,  # Configurable threshold
    lint_errors > thresholds.lint_errors_max,
]):
    verdict = "FAIL"
elif any([
    0 < failure_rate <= FAIL_THRESHOLD,
    optional_import_failures > 0,
    lint_warnings > thresholds.lint_warnings_max,
    missing_docstrings > 0,
    smoke_test_failures > 0,
]):
    verdict = "PASS WITH WARNINGS"
else:
    verdict = "PASS"

Baseline Comparison

Save results to .qc-baseline.json:

{
  "timestamp": "2026-02-15T15:00:00Z",
  "commit": "abc123",
  "verdict": "PASS WITH WARNINGS",
  "config": {
    "mode": "full",
    "thresholds": {"test_failure_rate": 0.05}
  },
  "phases": {
    "tests": {"total": 134, "passed": 134, "failed": 0, "coverage": 87.5},
    "imports": {"total": 50, "failed": 0, "optional_failed": 1, "critical_failed": 0},
    "types": {"errors": 0, "warnings": 5},
    "lint": {"errors": 0, "warnings": 12, "fixed": 8},
    "smoke": {"total": 14, "passed": 14},
    "docs": {"missing_docstrings": 3}
  }
}

On subsequent runs, report delta:

Tests:      134 → 140 (+6 ✅)
Coverage:   87% → 91% (+4% ✅)
Type errors: 0 → 0 (✅)
Lint warnings: 12 → 5 (-7 ✅)
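
The delta lines can be derived from two baseline dicts — a sketch using the .qc-baseline.json keys shown above (formatting is illustrative):

```python
# Sketch: compute deltas between two saved baselines.
def format_delta(old: dict, new: dict) -> list:
    """Return human-readable delta lines for key metrics."""
    pairs = [
        ("Tests", old["tests"]["total"], new["tests"]["total"]),
        ("Coverage", old["tests"]["coverage"], new["tests"]["coverage"]),
        ("Type errors", old["types"]["errors"], new["types"]["errors"]),
        ("Lint warnings", old["lint"]["warnings"], new["lint"]["warnings"]),
    ]
    lines = []
    for label, before, after in pairs:
        delta = after - before
        sign = f"{delta:+}" if delta else "±0"
        lines.append(f"{label}: {before} → {after} ({sign})")
    return lines
```
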

Report Output

Generate in 3 formats:

  1. Markdown (qc-report.md) — full detailed report for humans
  2. JSON (.qc-baseline.json) — machine-readable for CI/comparison
  3. Summary (chat message) — 10-line digest for Discord/Slack

Summary Format Example

📊 QC Report: my-project @ abc123
Verdict: ✅ PASS WITH WARNINGS

Tests:    134/134 passed (100%) | Coverage: 87%
Types:    0 errors
Lint:     0 errors, 12 warnings
Imports:  50/50 (1 optional failed)
Smoke:    14/14 passed

⚠️ Warnings:
- 3 missing docstrings
- 12 lint warnings (run with --fix)

Language-Specific Profiles

Read the appropriate profile before running:

  • Python: references/python-profile.md
  • TypeScript: references/typescript-profile.md
  • GDScript: references/gdscript-profile.md
  • General (any language): references/general-profile.md
