llama.cpp Benchmark

v1.0.0

Run llama.cpp benchmarks on GGUF models to measure prompt processing (pp) and token generation (tg) performance. Use when the user wants to benchmark LLM mod...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill's scripts and SKILL.md match the stated purpose: finding/building llama.cpp and running llama-bench. One minor inconsistency: the package metadata declares no required binaries, but the build/benchmark scripts assume tools like git, cmake, a C/C++ toolchain, and typical UNIX utilities (find, grep, make). These are expected for building llama.cpp but should be declared.
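Before running the skill, you can verify the undeclared build dependencies yourself. A minimal sketch; the tool list is inferred from the review above and may need adjusting for your platform:

```shell
# Check for the build tools the skill's scripts assume but do not declare.
missing=""
for tool in git cmake make cc grep find; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "Missing build tools:$missing" >&2
else
  echo "All assumed build tools present"
fi
```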
Instruction Scope
Runtime instructions and scripts are narrowly scoped to cloning/updating the llama.cpp repository, building it, and running llama-bench on local GGUF files. The benchmark script searches the user's home directory and /DATA to locate llama-bench (find ~ /DATA ...) — this is local-only scanning (no remote upload) but may traverse many user files. The build script runs git fetch/pull/clone (network access to GitHub) and compiles code locally; it may prompt interactively and will write under the chosen build directory.
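The lookup described above amounts to a read-only filesystem scan. A sketch of that kind of search (not the skill's exact command; the depth limit is an addition to keep it fast):

```shell
# Local-only search for an existing llama-bench binary, similar in spirit
# to the skill's lookup. Reads paths only; nothing leaves the machine.
# /DATA is the skill's second search root and may not exist on your system.
for root in "$HOME" /DATA; do
  [ -d "$root" ] || continue
  found=$(find "$root" -maxdepth 6 -type f -name llama-bench 2>/dev/null | head -n 1)
  if [ -n "$found" ]; then
    echo "Found: $found"
    break
  fi
done
```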
Install Mechanism
No remote arbitrary binary blobs or obscure download hosts are used; the build script clones from github.com/ggerganov/llama.cpp — a known upstream repository — and builds locally via cmake. No extract-from-unknown-URL operations detected.
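The clone-and-build flow can be sketched as below. This assumes the default build directory noted later in the review (~/Repo/llama.cpp) and is not the skill's literal script; the only network access is to github.com:

```shell
# Clone (or fast-forward) the upstream repo, then build llama-bench locally.
REPO_DIR="${REPO_DIR:-$HOME/Repo/llama.cpp}"
if [ -d "$REPO_DIR/.git" ]; then
  git -C "$REPO_DIR" fetch && git -C "$REPO_DIR" pull --ff-only
else
  git clone https://github.com/ggerganov/llama.cpp "$REPO_DIR"
fi
cmake -S "$REPO_DIR" -B "$REPO_DIR/build" -DCMAKE_BUILD_TYPE=Release
cmake --build "$REPO_DIR/build" --target llama-bench -j
```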
Credentials
The skill declares no environment variables or credentials. It references an optional LLAMA_BACKEND env var in docs (expected). It does not request or use tokens/secret env vars. Git operations are against a public GitHub repo and should not require credentials.
Persistence & Privilege
The skill is not always-enabled and does not alter other skills or system-wide configuration. It creates/clobbers files under the chosen build directory (default ~/Repo/llama.cpp) and output directory (default ./benchmark_results), which is expected for a build/benchmark tool.
Assessment
This skill appears to do what it says: it will clone or update the llama.cpp GitHub repo, build llama-bench, then run local benchmarks on GGUF files. Before installing:
1) Be prepared to install and run build tools (git, cmake, make/ninja, a C/C++ compiler); the metadata doesn't list these dependencies.
2) Expect the build to use network access to GitHub and to write files under ~/Repo/llama.cpp and whatever output directory you choose.
3) The benchmark script searches your home directory and /DATA to find llama-bench; this only reads local paths but can traverse many files and may take time.
4) If you need to be extra cautious, review the upstream repository (https://github.com/ggerganov/llama.cpp), run the build inside a sandbox or VM, and ensure you have sufficient disk space and GPU drivers for the chosen backend.
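Once built, a benchmark run looks roughly like this. The model path is hypothetical, and the flags shown are standard llama-bench options (-p for the prompt-processing test size, -n for the token-generation test size), not the skill's exact invocation:

```shell
# Run llama-bench on a local GGUF file and save CSV results.
OUT_DIR="${OUT_DIR:-./benchmark_results}"
mkdir -p "$OUT_DIR"
"$HOME/Repo/llama.cpp/build/bin/llama-bench" \
  -m "$HOME/models/your-model.gguf" \
  -p 512 -n 128 \
  -o csv > "$OUT_DIR/results.csv"
```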

Like a lobster shell, security has layers — review code before you run it.

latest · vk9772t3jt2kwek3tjmkbjt0t0s849hr7

