Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
SOTA AI Model Tracker
v1.0.0 · Provides daily-updated authoritative data and APIs tracking state-of-the-art AI models across categories from LMArena, Artificial Analysis, and HuggingFace.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Verdict: Benign (high confidence)
Purpose & Capability
The name/README/SKILL.md describe a SOTA model tracker; the code files (scrapers, fetchers, init_db, rest_api, server) match that purpose. The metadata declares no credentials or binaries, which is consistent with a self-hosted Python project that runs scrapers and serves a local API.
Instruction Scope
SKILL.md instructs the user to run scrapers (Playwright + Chromium), pip install requirements, write/overwrite ~/.claude/CLAUDE.md for static embedding, and optionally enable a systemd user timer — all within the stated purpose. These instructions do modify user config and run network scrapers; they're expected for this tool but users should be aware these are persistent, local file and system-level changes.
Install Mechanism
There is no install spec in the skill metadata (instruction-only), but the repo includes requirements.txt, pyproject.toml and scripts that require installing Python deps and Playwright/Chromium. That mismatch is explainable (manual install expected) but worth noting: the skill will require downloads (pip packages and a browser) if you follow SKILL.md.
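Following SKILL.md's manual install path would look roughly like the sketch below. The requirements.txt file and the Chromium download are named in the scan; the virtualenv layout is an arbitrary suggestion, not something the repo prescribes:

```shell
# Manual setup sketch; run from the skill's repo root.
# requirements.txt and the Chromium download come from the scan above;
# the .venv location is an arbitrary choice.
python3 -m venv .venv
. .venv/bin/activate
# Install Python dependencies only if the manifest is actually present
[ -f requirements.txt ] && pip install -r requirements.txt
# Playwright scrapers need a browser binary; this downloads Chromium
command -v playwright >/dev/null 2>&1 && playwright install chromium
echo "environment ready"
```

Keeping everything inside the virtualenv means uninstalling is just deleting the .venv directory.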
Credentials
The project does not request secrets or unrelated environment variables. SECURITY.md documents optional env vars (SOTA_CACHE_DIR, SOTA_LOG_LEVEL) but none are required. The scrapers use publicly accessible sources; no credentials are stored in the repo.
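SECURITY.md's optional variables can be set before launching anything. The fallback values below are illustrative assumptions, not defaults documented by the project:

```shell
# Both variables are optional per SECURITY.md; the fallbacks here are
# assumptions for illustration, not project-documented defaults.
export SOTA_CACHE_DIR="${SOTA_CACHE_DIR:-$HOME/.cache/sota-tracker}"
export SOTA_LOG_LEVEL="${SOTA_LOG_LEVEL:-INFO}"
echo "cache=$SOTA_CACHE_DIR log=$SOTA_LOG_LEVEL"
```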
Persistence & Privilege
The skill declares always:false with normal autonomous invocation. The runtime instructions recommend creating or updating ~/.claude/CLAUDE.md and enabling a systemd --user timer; those are legitimate user-level persistence actions, but they change local configuration and create a periodic task. If enabled, the optional MCP server would run a local service that may be reachable depending on host networking and .mcp.json configuration.
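If you do opt into the systemd user timer, a minimal sketch is below. The unit name sota-tracker is hypothetical, and nothing runs until you also create a matching sota-tracker.service pointing at the scraper entry point and enable the timer yourself:

```shell
# Writes a hypothetical user-level timer unit; inert until a matching
# sota-tracker.service exists and the timer is enabled.
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/sota-tracker.timer <<'EOF'
[Unit]
Description=Daily SOTA model tracker scrape

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF
# Review the file first, then enable with:
#   systemctl --user enable --now sota-tracker.timer
```

Because the unit lives under ~/.config/systemd/user, removing the file and running systemctl --user daemon-reload fully reverses the change.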
Scan Findings in Context
[pre-scan-injection-signals] expected: Pre-scan reported no injection signals. Given this repo contains many scrapers and network calls, the absence of flagged patterns is plausible; manual review of network calls and scrapers is still recommended before running.
Assessment
What to consider before installing/running:
- If you only need up-to-date SOTA data, prefer downloading the published data/sota_export.json or CSV (SKILL.md Option 1) instead of running scrapers or enabling servers.
- Running scrapers requires pip install -r requirements.txt and playwright install chromium (downloads a browser binary). Scrapers will make network requests to LMArena, ArtificialAnalysis, HuggingFace, Civitai, etc.; review scraper code if you have policy concerns about any source.
- The project will modify user config if you follow the embedding instructions (writes ~/.claude/CLAUDE.md) and may add a user-level systemd timer; these are normal but persistent changes — back up the file first.
- If you run the REST API or MCP server locally, consider firewalling or adding authentication (SKILL.md and SECURITY.md show how) so the service isn't exposed publicly by accident.
- The repo includes GitHub Actions automation in its workflow docs; if you fork and enable Actions, the workflow will auto-commit updated data to your fork. Be mindful of what you allow a workflow to push from your account.
- To reduce risk: run inside a dedicated virtualenv, inspect init_db.py and scrapers for unexpected behavior, use the static JSON export where possible, and avoid enabling MCP or systemd timers until you’re comfortable with the code.
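The lowest-risk option in the list (using the static export instead of scrapers) can be sanity-checked before anything consumes it. The data/sota_export.json path comes from SKILL.md Option 1:

```shell
# Validate the static export without running any scraper or server code.
# The path data/sota_export.json is taken from SKILL.md Option 1.
if [ -f data/sota_export.json ]; then
  python3 -m json.tool data/sota_export.json > /dev/null \
    && status="export parses as valid JSON"
else
  status="export missing: download data/sota_export.json rather than running scrapers"
fi
echo "$status"
```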
Overall this appears coherent and consistent with its stated purpose; the main risks are operational (network access, local file writes, running a local server) rather than deceptive behavior.
Tags: ai-models · anthropic · huggingface · latest · lmsys · mcp · rankings · sota
