{
  "skill": {
    "slug": "ml-model-eval-benchmark",
    "displayName": "ML Model Eval Benchmark",
    "summary": "Compare model candidates using weighted metrics and deterministic ranking outputs. Use for benchmark leaderboards and model promotion decisions.",
    "tags": { "latest": "0.1.0" },
    "stats": {
      "comments": 0,
      "downloads": 426,
      "installsAllTime": 3,
      "installsCurrent": 2,
      "stars": 0,
      "versions": 1
    },
    "createdAt": 1772136624524,
    "updatedAt": 1777525422215
  },
  "latestVersion": {
    "version": "0.1.0",
    "createdAt": 1772136624524,
    "changelog": "- Initial release of ml-model-eval-benchmark.\n- Supports weighted metric evaluation and deterministic model ranking.\n- Enables benchmark leaderboard generation and model promotion decisions.\n- Includes scripts and guides for consistent evaluation workflows.\n- Enforces standardized metric names, scales, and explicit weighting documentation.",
    "license": null
  },
  "metadata": null,
  "owner": {
    "handle": "0x-professor",
    "userId": "s17bg4xm5b50d70ncct7b9shxh83n6jb",
    "displayName": "Muhammad Mazhar Saeed",
    "image": "https://avatars.githubusercontent.com/u/160357695?v=4"
  },
  "moderation": null
}