{
  "skill": {
    "slug": "hle-benchmark-evolver",
    "displayName": "Hle Benchmark Evolver",
    "summary": "Runs HLE-oriented benchmark reward ingestion and curriculum generation for capability-evolver. Use when the user asks to optimize Humanity's Last Exam score,...",
    "tags": { "latest": "1.0.0" },
    "stats": {
      "comments": 0,
      "downloads": 744,
      "installsAllTime": 0,
      "installsCurrent": 0,
      "stars": 0,
      "versions": 1
    },
    "createdAt": 1771187782833,
    "updatedAt": 1777525170149
  },
  "latestVersion": {
    "version": "1.0.0",
    "createdAt": 1771187782833,
    "changelog": "- Initial release of hle-benchmark-evolver skill for OpenClaw.\n- Enables ingestion of HLE benchmark report JSONs to drive curriculum and evolution workflows.\n- Supports easy-first curriculum queues, focus area suggestion, and immediate result summaries.\n- Offers shell commands for both single-run and fully automated evolution-feedback loops.\n- Always outputs compact, structured JSON summarizing key progress metrics and curriculum focus.",
    "license": null
  },
  "metadata": null,
  "owner": {
    "handle": "wanng-ide",
    "userId": "s176gjsdvgc0ce9sr8b6s9yzmx84mv9j",
    "displayName": "WANGJUNJIE",
    "image": "https://avatars.githubusercontent.com/u/32323900?v=4"
  },
  "moderation": {
    "isSuspicious": true,
    "isMalwareBlocked": false,
    "verdict": "suspicious",
    "reasonCodes": [
      "suspicious.dangerous_exec",
      "suspicious.llm_suspicious",
      "suspicious.vt_suspicious"
    ],
    "summary": "Detected: suspicious.dangerous_exec, suspicious.llm_suspicious, suspicious.vt_suspicious",
    "engineVersion": "v2.4.5",
    "updatedAt": 1777525170149
  }
}