SOTA Tracker (Claw)

v1.0.1

Provides daily-updated, authoritative rankings and metadata of state-of-the-art AI models aggregated from leading sources via JSON, API, or local queries.

MIT-0
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
Although the registry metadata lacks a short description, the included SKILL.md and the code files describe the same purpose: scrape LMArena/ArtificialAnalysis/HuggingFace, produce JSON/CSV/SQLite exports, run a REST API and an optional MCP server, and provide scripts to embed data into agent files. No env vars or credentials are required, which is proportionate for a public-data scraper/reader. The code files (scrapers, rest_api, server.py, init_db.py) align with the stated goal.
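A local query against the SQLite export is the lowest-risk way to use this data. The sketch below assumes a hypothetical schema (table and column names are illustrative; the actual tables created by init_db.py may differ) and uses an in-memory database to stand in for the real export file:

```python
import sqlite3

# Hypothetical schema: the real tables produced by init_db.py may differ.
conn = sqlite3.connect(":memory:")  # point at the real export file instead
conn.execute(
    "CREATE TABLE models (name TEXT, source TEXT, rank INTEGER, score REAL)"
)
conn.executemany(
    "INSERT INTO models VALUES (?, ?, ?, ?)",
    [("model-a", "lmarena", 1, 1350.0), ("model-b", "lmarena", 2, 1340.0)],
)

# Top-ranked models for one source: the kind of local, offline query
# the SQLite export enables without running the REST API at all.
rows = conn.execute(
    "SELECT name, rank FROM models WHERE source = ? ORDER BY rank LIMIT 5",
    ("lmarena",),
).fetchall()
print(rows)
```

Querying the file directly keeps the attack surface small: no server process, no network exposure.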
Instruction Scope
Runtime instructions tell the user to run scrapers, install Playwright (which downloads browsers), run init_db.py, start uvicorn/rest_api or server.py, set up systemd timers / cron jobs, copy scripts into ~/.claude or ~/.cyrus, and add an .mcp.json entry to enable the MCP server. These actions are all relevant to the repo's purpose, but they grant the code permission to write to user home files and run periodically — users should verify what the scripts do before enabling automation.
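The MCP step the instructions describe amounts to a small config entry. A sketch of what such an .mcp.json entry typically looks like follows; the server name, command, and path are assumptions, not values taken from the repo, so check the project's own instructions for the exact entry:

```json
{
  "mcpServers": {
    "sota-tracker": {
      "command": "python",
      "args": ["/path/to/server.py"]
    }
  }
}
```

Whatever the exact keys, the effect is the same: the listed command is launched on your machine and exposed to agents, which is why reading server.py first matters.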
Install Mechanism
No automatic install spec is provided (instruction-only), which limits silent installs. However, the repo expects users to install requirements.txt with pip and to run 'playwright install chromium', a step that downloads a browser binary from Playwright's distribution servers. The project does not reference obscure download URLs; these dependencies and browser downloads are standard for Playwright-based scrapers.
Credentials
The project declares no required credentials or secret environment variables. SECURITY.md lists optional env vars (SOTA_CACHE_DIR, SOTA_LOG_LEVEL) only. There are no requests for unrelated cloud credentials or sensitive tokens. Note: the REST API is described as read-only but with wildcard CORS by default in docs — exposing the server without auth may be acceptable for public data but is a deployment decision you should make consciously.
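Handling the two optional env vars is straightforward. This is a minimal sketch, assuming defaults of my own choosing (the project's documented fallbacks, if any, may differ):

```python
import logging
import os
from pathlib import Path

# Optional env vars listed in SECURITY.md. The fallback values here are
# assumptions for illustration, not the project's documented defaults.
cache_dir = Path(
    os.environ.get("SOTA_CACHE_DIR", Path.home() / ".cache" / "sota-tracker")
)
log_level = os.environ.get("SOTA_LOG_LEVEL", "INFO").upper()

# Unknown level names fall back to INFO rather than raising.
logging.basicConfig(level=getattr(logging, log_level, logging.INFO))
logging.getLogger("sota-tracker").info("cache dir: %s", cache_dir)
```

Because both variables are optional and local (a directory and a verbosity knob), there is nothing here that could leak a credential even if misconfigured.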
Persistence & Privilege
The skill is not flagged always:true and does not request elevated system privileges. It does, however, instruct you how to persist data/automation: enabling GitHub Actions on a fork (server-side automation), adding a systemd timer or cron job, and writing to agent files (e.g., ~/.claude/CLAUDE.md, ~/.claude/skills/...). Those are appropriate for the intended use but represent persistent presence on your system and changes to agent configuration, so inspect the scripts before enabling them.
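The systemd-timer route mentioned above is the easiest form of persistence to audit, because the unit files state exactly what runs and when. A sketch of a user-level service/timer pair follows; the unit names and script path are illustrative, not taken from the repo:

```ini
# ~/.config/systemd/user/sota-scrape.service
[Unit]
Description=Run SOTA Tracker scrapers

[Service]
Type=oneshot
ExecStart=/usr/bin/python /path/to/scrapers/run_all.py

# ~/.config/systemd/user/sota-scrape.timer
[Unit]
Description=Daily SOTA Tracker scrape

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl --user enable --now sota-scrape.timer`; because everything is declared in the two files, `systemctl --user cat sota-scrape.service` later shows precisely what the automation does.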
Assessment
This project appears coherent and fits its stated purpose (daily scrapers, exports, REST API, optional MCP). Before installing or enabling automation:

1) Review the scripts that will write to your home directory (update_sota_claude_md.py, any recommended copies to ~/.claude or ~/.cyrus) so you know what they change.
2) If you run scrapers, do so in an isolated environment (virtualenv, container, or VM), because Playwright will download browser binaries and the scrapers make external network requests.
3) Don't expose the REST API publicly without adding authentication or firewall rules; the docs note wildcard CORS and no built-in auth by default.
4) If you enable GitHub Actions on a fork, review the workflow to understand what it commits; enabling workflows gives the repo the ability to push commits in that fork's context (standard GitHub behavior).
5) If you want to integrate the project as an MCP server, understand that adding the .mcp.json entry and running server.py will make a local service available to agents; only enable it if you trust the code and its network exposure.
6) No secrets are requested by the skill, which is good; still, run the tests and read init_db.py and scrapers/run_all.py to confirm they only access the public sources you expect.

If any of these points are unclear, run the project locally in a sandbox first, or ask for a targeted code review of specific scripts (e.g., update_sota_claude_md.py, server.py, scrapers/run_all.py).

Like a lobster shell, security has layers — review code before you run it.

Tags: agents · ai-models · claw · latest · rankings · sota · updates

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
