Tech Stack Evaluation

Pass. Audited by VirusTotal on May 8, 2026.

Overview

Type: OpenClaw Skill
Name: tech-stack-evaluation
Version: 1.0.0

The skill bundle provides a structured framework for an AI agent to evaluate a project's technology stack and suggest improvements. The shell commands included in SKILL.md are standard, read-only operations (find, wc, grep) used for code metrics and maintainability audits; the scan found no evidence of malicious intent, data exfiltration, or unauthorized execution.

Findings (0)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

The agent may examine local source files or file statistics in the current project to make recommendations.

Why it was flagged

The skill documents local shell commands for inspecting project structure and code metrics. These commands are read-only and directly support the stated maintainability assessment purpose.

Skill content
Use the `codebase-survey` skill's maintainability audit to produce evidence:

```bash
# File size audit
find . -type f ... | xargs wc -l | sort -rn | head -20

# Method/function count audit
grep -c "def " app/core/allocation.py
```
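As an illustration of how the documented audit might be run end to end, the sketch below fills in the elided `find` filter for a hypothetical Python project. The `*.py` pattern, the `.git` exclusion, and the `head` limits are illustrative assumptions, not taken from the skill itself:

```bash
# Illustrative expansion of the audit; file patterns and paths are
# assumptions for a Python project, not part of the skill's content.

# Largest source files by line count (read-only)
find . -type f -name "*.py" -not -path "./.git/*" -print0 \
  | xargs -0 wc -l | sort -rn | head -20

# Function definitions per file, highest counts first (read-only)
grep -rc "def " --include="*.py" . | sort -t: -k2 -rn | head -10
```

Both pipelines only read file metadata and contents, consistent with the report's read-only assessment.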
Recommendation

Use the skill from the intended project directory, and review any proposed commands before running them if the repository contains sensitive or unrelated files.