LLM Inference Performance Estimator
v1.0.0 · Estimate LLM inference performance metrics including TTFT (time to first token), decode speed, and VRAM requirements based on model architecture, GPU specs, and quantization format.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign · high confidence
Purpose & Capability
The skill's name/description (LLM inference performance estimator) matches the actions described in SKILL.md: parsing model configs, accepting GPU specs/quant formats, and computing TTFT/throughput/VRAM. It does not request unrelated binaries, credentials, or system config paths.
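To make the scope concrete, here is a minimal sketch of the kind of arithmetic such an estimator performs: a weights-only VRAM estimate from parameter count and quantization width, and a memory-bandwidth-bound decode-speed estimate. This is illustrative only, not the skill's actual code; the function names, the fixed overhead, and the bandwidth-bound model are assumptions.

```python
def estimate_vram_gb(num_params_b: float, bits_per_weight: float,
                     overhead_gb: float = 1.5) -> float:
    """Weights-only VRAM estimate in GiB (KV cache and activations excluded)."""
    weight_bytes = num_params_b * 1e9 * bits_per_weight / 8
    return weight_bytes / 2**30 + overhead_gb

def estimate_decode_tok_s(num_params_b: float, bits_per_weight: float,
                          mem_bandwidth_gb_s: float) -> float:
    """Upper-bound decode speed: each generated token must read all weights
    once, so throughput is capped by memory bandwidth / weight size."""
    weight_gb = num_params_b * 1e9 * bits_per_weight / 8 / 1e9
    return mem_bandwidth_gb_s / weight_gb

# Example: a 7B model quantized to 4 bits on a GPU with ~1000 GB/s bandwidth
vram = estimate_vram_gb(7.0, 4)            # roughly 4.8 GiB
speed = estimate_decode_tok_s(7.0, 4, 1000)  # roughly 286 tok/s ceiling
```

Note that these are back-of-envelope bounds; a real estimator would also account for KV cache growth with context length and kernel efficiency.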
Instruction Scope
Runtime instructions stay within the stated purpose: they ask for a preset model name or a model config.json (user-pasted content or a local file path) and GPU specs. The only noteworthy behavior is that, if given a local file path, the agent is instructed to read that file to extract fields — which is necessary for the estimator but means the agent will access whatever file path the user supplies. The SKILL.md does not instruct the agent to fetch remote URLs itself (it suggests the user open HF/ModelScope links in a browser and paste the config).
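This is also why pasting the config.json contents is sufficient: the estimator only needs a handful of architecture fields. A hypothetical sketch of that extraction, using Hugging Face-style field names (an assumption, since SKILL.md's exact parsing is not shown):

```python
import json

# User-pasted config.json contents (Llama-style field names assumed)
config_text = '''{
  "hidden_size": 4096,
  "num_hidden_layers": 32,
  "num_attention_heads": 32,
  "num_key_value_heads": 8,
  "vocab_size": 32000
}'''

cfg = json.loads(config_text)

# Per-token KV cache size in fp16: 2 tensors (K and V) per layer,
# each num_key_value_heads * head_dim values at 2 bytes apiece.
head_dim = cfg["hidden_size"] // cfg["num_attention_heads"]
kv_bytes_per_token = (2 * cfg["num_hidden_layers"]
                      * cfg["num_key_value_heads"] * head_dim * 2)
```

Nothing here requires filesystem access, which supports the review's suggestion to prefer pasted content over a file path.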
Install Mechanism
No install spec or code files — instruction-only skill. This minimizes risk because nothing is downloaded or written to disk by the skill itself.
Credentials
The skill declares no required environment variables, credentials, or special config paths. The only inputs are user-supplied model config data and GPU specs, which are proportionate to estimation functionality.
Persistence & Privilege
always is false and the skill is user-invocable. disable-model-invocation is left at its default (the agent may invoke the skill autonomously), which is the platform default and not excessive here. The skill does not request persistent system-wide changes or access to other skills' configs.
Assessment
This skill appears to do exactly what it says — estimate TTFT, decode speed, and VRAM from model and GPU specs — and it asks for no credentials or installs. A few practical cautions before use:
- If you provide a local file path, the agent will read that file. Do not point it at unrelated sensitive files (e.g., ~/.ssh, credential files, or system configs).
- Prefer copying and pasting only the model config.json contents (or sanitizing it) rather than giving a broad directory path. Configs typically do not contain secrets, but double-check before pasting.
- The skill suggests visiting HF/ModelScope URLs in your browser and pasting config text; it does not fetch those URLs itself. If you prefer, provide model parameters manually instead of providing a file.
- No environment variables or cloud credentials are requested, and there is no install step. If you see prompts later asking for secrets or for the skill to fetch remote resources, stop and verify why.
Overall this skill is internally consistent and low-risk for the stated task; follow the above precautions about local file paths and pasted content.
