Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using it.
Tech Stack Evaluator
v2.1.1 · Technology stack evaluation and comparison with TCO analysis, security assessment, and ecosystem health scoring. Use when comparing frameworks, evaluating te...
⭐ 0 · 1.5k · 4 current · 4 all-time
by Alireza Rezvani (@alirezarezvani)
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence

Purpose & Capability
Name/description (tech comparisons, TCO, security assessment) align with the provided scripts (comparator, TCO, migration, ecosystem, security). No required env vars or binaries are declared, which is proportionate to the stated purpose. However, SKILL.md examples imply the scripts will fetch live GitHub/npm metrics or be usable via CLI flags (e.g., `--technology react`), while the visible modules (e.g., ecosystem_analyzer.py, format_detector.py, migration_analyzer.py) are written as library classes/functions that accept data dicts rather than showing a network fetcher or CLI argument parsing — so there is a mild capability mismatch between documentation and code.
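Given that the visible modules read as library classes that accept pre-supplied data dicts, the calling pattern they imply would look roughly like this. This is a minimal sketch: `EcosystemAnalyzer`, its `score` method, and the metric field names are stand-ins, not the skill's actual API.

```python
# Illustrative stand-in for a class like those in ecosystem_analyzer.py.
# Note what is absent: no CLI argument parsing, no network fetcher --
# the caller must supply already-collected metrics as a dict.
class EcosystemAnalyzer:
    def score(self, metrics: dict) -> float:
        # Toy scoring: normalize two fields into 0..1 and average them.
        stars = min(metrics.get("stars", 0) / 100_000, 1.0)
        downloads = min(metrics.get("weekly_downloads", 0) / 10_000_000, 1.0)
        return round((stars + downloads) / 2, 3)

# Pre-fetched data in, score out; nothing is downloaded.
metrics = {"stars": 210_000, "weekly_downloads": 25_000_000}
print(EcosystemAnalyzer().score(metrics))
```

This is the gap the scan describes: the SKILL.md examples suggest `--technology react` style invocation, but code shaped like the above only works if something else has already gathered the GitHub/npm numbers.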
Instruction Scope
SKILL.md instructs running scripts with command-line flags and suggests automated retrieval of ecosystem/security metrics. The included source snippets mostly define classes and pure computation functions that expect structured input rather than performing network calls or having CLI entry points. The instructions therefore overstate automation (implied live data collection). This is not directly dangerous, but it is an incoherence: the agent or user may expect the skill to fetch external data automatically when the code appears to require pre-supplied metrics.
Install Mechanism
No install spec is provided and the skill is instruction-only plus local Python scripts. That keeps disk/write risk low. There are no external downloads, URL installs, or package manager installs in the repository metadata.
Credentials
The skill declares no required environment variables, credentials, or config paths — which is appropriate given the documented behavior. No files or variables appear to be requested that would be disproportionate to the task.
Persistence & Privilege
The skill is not set to always:true and does not request elevated persistence. It contains only local scripts and references sample input/data assets; there is no evidence it modifies other skills or global agent settings.
What to consider before installing
This package appears to be a legitimate tech-evaluation tool, but there is a mismatch between SKILL.md (which shows convenient CLI usage and implies automatic fetching of GitHub/npm/security data) and the provided Python modules (which look like library components that expect structured input). Before using it in production or granting it network or agent privileges:

1. Inspect the remaining scripts (especially security_assessor.py and stack_comparator.py) for network calls, subprocess usage, or hidden endpoints (look for imports such as requests, urllib, and subprocess).
2. Confirm whether CLI wrappers or data-fetching code are present or must be added; the documented examples may be aspirational.
3. Run the scripts in an isolated environment with the provided sample inputs to verify behavior.
4. If you expect live metric collection, ask the author for documentation on authentication and endpoints; do not provide credentials until you confirm what is contacted and why.

Like a lobster shell, security has layers: review code before you run it.
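The import inspection suggested above can be automated with a small AST pass rather than eyeballing each file. This is a sketch under assumptions: the flagged-module list is illustrative, and a clean result does not prove the code is safe (imports can be dynamic or obfuscated); it only speeds up the first look.

```python
# Audit helper: flag Python files that import modules enabling network
# or process access. Extend FLAGGED to taste; this list is an assumption.
import ast
from pathlib import Path

FLAGGED = {"requests", "urllib", "subprocess", "socket", "http"}

def flagged_imports(source: str) -> set:
    """Return the flagged top-level modules imported by this source text."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                root = alias.name.split(".")[0]
                if root in FLAGGED:
                    found.add(root)
        elif isinstance(node, ast.ImportFrom) and node.module:
            root = node.module.split(".")[0]
            if root in FLAGGED:
                found.add(root)
    return found

def scan(directory: str) -> dict:
    """Map each .py file under directory to the flagged modules it imports."""
    hits = {}
    for path in Path(directory).rglob("*.py"):
        mods = flagged_imports(path.read_text(errors="ignore"))
        if mods:
            hits[str(path)] = mods
    return hits
```

Running `scan(".")` in the unpacked skill directory gives a quick worklist: any file it reports deserves a manual read before the skill is granted network or agent privileges.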
latest · vk9726s8pd5sgby4y7pmh0g6dbn82knnt
