Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

AA Benchmarking Framework

Composite scoring and efficiency frontier analysis for LLM evaluation — combines multiple quality dimensions (accuracy, latency, cost, consistency) into a si...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
1 current install · 1 all-time install
by Nissan Dookeran (@nissan)
Security Scan
VirusTotal: Pending
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name, description, and planned capabilities match a benchmarking/analysis skill. Requesting python3 as a runtime makes sense for data processing/visuals. However, the metadata's primaryEnv set to 'production' is unexplained and disproportionate for a pure benchmarking helper; the SKILL.md also references integration with LangFuse (an external tracing service) but does not declare any required credentials or network access.
Instruction Scope
The SKILL.md is a draft and contains only high-level planned capabilities, not concrete runtime instructions. It mentions ingesting trace data from LangFuse and exporting results, which implies reading external data and making outbound network requests, yet the metadata claims outbound networking is false and no environment variables or endpoints are declared. Because runtime behavior is underspecified, it's unclear what data the skill will read, what endpoints it will contact, or what credentials it will require.
Install Mechanism
Instruction-only skill with no install spec and no code files. That minimizes immediate disk/write risk. Declaring python3 as a required binary is reasonable for a planned implementation; otherwise there is nothing being fetched or installed.
Credentials
No environment variables are declared, yet the metadata sets primaryEnv to 'production' and the text promises LangFuse integration (which normally requires an API key). This mismatch means either the skill will need secrets/network access that are not declared, or the manifest is incorrect; both are red flags for incomplete or inconsistent security posture.
Persistence & Privilege
The 'always' flag is false, and there are no install hooks or instructions to modify agent or system configuration. The skill does not request a persistent elevated presence in its current form.
What to consider before installing
This skill is currently a draft with plausible goals, but several inconsistencies make it risky to enable for production use. Before installing or granting the agent access, ask the author to: (1) provide concrete runtime instructions and example commands/scripts; (2) declare any required environment variables (e.g., LangFuse API key) and justify why 'primaryEnv' is set to 'production'; (3) clarify whether outbound networking is required and update metadata accordingly; and (4) supply the implementation (code or install spec) so you can review exactly what will run. If you must test now, do so in an isolated environment where no sensitive credentials or production data are available.

Like a lobster shell, security has layers — review code before you run it.

Current version: v0.1.0
latest: vk979ensjz18jmk9dk3dq2e2yah83rkyv

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

Primary env: production

SKILL.md

Last used: 2026-03-24 · Memory references: 1 · Status: Active

AA Benchmarking Framework

STATUS: DRAFT — This skill is planned but not yet fully implemented.

What This Does

Provides a systematic framework for multi-dimensional LLM evaluation using composite scoring, efficiency frontier analysis, and Pareto optimality. Rather than ranking models on a single metric, it helps identify which models are non-dominated — i.e., no other model is better on all dimensions simultaneously. Designed for teams that need principled model selection beyond simple leaderboard rankings.
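Since the implementation is not yet published, here is a minimal sketch of the non-domination test described above, assuming each model is a plain dict of metric values and that lower is better for every metric (cost, latency, error rate). The function and field names are illustrative, not taken from the skill's code.

```python
# Hypothetical sketch of Pareto non-domination, assuming lower is better
# for every metric. Names are illustrative, not from the skill itself.

def dominates(a, b, metrics):
    """True if model `a` is at least as good as `b` on every metric
    and strictly better on at least one (lower is better)."""
    return (all(a[m] <= b[m] for m in metrics)
            and any(a[m] < b[m] for m in metrics))

def pareto_frontier(models, metrics):
    """Return the models not dominated by any other model."""
    return [m for m in models
            if not any(dominates(other, m, metrics)
                       for other in models if other is not m)]

models = [
    {"name": "A", "cost": 1.0, "error": 0.10},
    {"name": "B", "cost": 2.0, "error": 0.05},
    {"name": "C", "cost": 2.5, "error": 0.08},  # worse than B on both axes
]
frontier = pareto_frontier(models, ["cost", "error"])
print([m["name"] for m in frontier])  # A and B are non-dominated
```

Here C is dominated by B (higher cost and higher error), while A and B each win on one axis, so both sit on the frontier.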

Planned Capabilities

  • Composite scoring with configurable dimension weights (accuracy, latency, cost, recall, F1)
  • Pareto frontier detection across any two or more evaluation dimensions
  • Radar/spider chart visualisation for multi-dimensional comparison
  • Statistical significance testing across benchmark runs (t-test, Mann-Whitney U)
  • Integration with LangFuse for trace-based evaluation data ingestion
  • Export to CSV/JSON for downstream analysis
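The first capability above — composite scoring with configurable weights — could look roughly like the following sketch. It assumes min-max normalisation to [0, 1] before weighting, and the metric names, weight values, and which metrics are treated as lower-is-better are all illustrative assumptions, not declared by the skill.

```python
# Hypothetical sketch of weighted composite scoring. Assumes min-max
# normalisation; metric/weight names are illustrative assumptions.

def normalise(values, higher_is_better=True):
    """Scale a list of raw metric values to [0, 1], higher = better."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1 - s for s in scaled]

def composite_scores(rows, weights, lower_is_better=("latency", "cost")):
    """rows: list of dicts sharing the metric keys in `weights`."""
    norm = {m: normalise([r[m] for r in rows], m not in lower_is_better)
            for m in weights}
    total = sum(weights.values())
    return [sum(weights[m] * norm[m][i] for m in weights) / total
            for i in range(len(rows))]

rows = [
    {"accuracy": 0.90, "latency": 1.2, "cost": 3.0},
    {"accuracy": 0.88, "latency": 0.4, "cost": 1.0},
]
scores = composite_scores(rows, {"accuracy": 0.6, "latency": 0.25, "cost": 0.15})
# With accuracy weighted at 0.6, the first model scores higher despite
# losing on latency and cost.
```

Changing the weights shifts the ranking, which is exactly why a tool like this should make them configurable rather than baked in.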

When To Use

  • Choosing between 3+ LLM providers on competing objectives (e.g. GPT-4o vs Claude 3.5 vs Gemini)
  • Building an evaluation dashboard for recurring model benchmarks
  • Presenting model selection rationale to stakeholders with visual evidence
  • Running efficiency frontier analysis to identify cost-optimal models for a quality threshold
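The last use case — cost-optimal selection under a quality threshold — reduces to a filter-then-minimise step. A minimal sketch, assuming models carry illustrative "accuracy" and "cost" fields (not fields defined by the skill):

```python
# Hypothetical sketch: cheapest model that meets a quality threshold.
# Field names ("accuracy", "cost") are illustrative assumptions.

def cost_optimal(models, min_accuracy):
    """Return the lowest-cost model with accuracy >= min_accuracy, or None."""
    eligible = [m for m in models if m["accuracy"] >= min_accuracy]
    return min(eligible, key=lambda m: m["cost"]) if eligible else None

models = [
    {"name": "A", "accuracy": 0.92, "cost": 3.0},
    {"name": "B", "accuracy": 0.88, "cost": 1.0},
    {"name": "C", "accuracy": 0.90, "cost": 2.0},
]
print(cost_optimal(models, 0.90)["name"])  # C meets the bar at lowest cost
```

B is cheapest overall but falls below the 0.90 bar, so C wins; this is the one-dimensional special case of the frontier analysis above.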

Files

1 total
