SOTA Tracker (Claw)
Advisory: audited by static analysis on Apr 30, 2026.
Overview
No suspicious patterns detected.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
An agent that consumes this file could be steered toward delegating long-running tasks to an unrelated Cyrus workflow instead of only answering SOTA model questions.
This agent-facing file introduces an unrelated automation/delegation workflow that can run background work through a separate system, which does not fit the SOTA tracker's stated purpose.
"Delegate to Cyrus - execution happens automatically ... Overnight execution (background-safe)"
Remove or isolate the Cyrus instructions from the skill package, or treat them as optional developer documentation that is not loaded into user-agent context.
Future agent behavior and recommendations could be persistently influenced by automatically updated content, including any bad or poisoned data from upstream sources.
The documented workflow writes externally sourced model-ranking content into a persistent agent context file, and the same README recommends daily automation via systemd or cron.
This embeds a compact SOTA summary directly in your `~/.claude/CLAUDE.md` file.
Use manual updates or review diffs before enabling timers; back up agent instruction files and ensure updates are clearly delimited and reversible.
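One way to keep such automated writes clearly delimited and reversible, as the mitigation suggests, is to confine them between fixed markers and replace only that region on each update. A minimal sketch in Python; the marker strings are illustrative assumptions, not part of the scanned skill:

```python
# Replace only the delimited SOTA block on each update, leaving the rest
# of the agent instruction file untouched and easy to revert.
# The marker strings below are assumptions, not part of the skill.
START = "<!-- sota-tracker:start -->"
END = "<!-- sota-tracker:end -->"

def splice(text: str, summary: str) -> str:
    """Return `text` with the marked block replaced by (or appended with) `summary`."""
    block = f"{START}\n{summary}\n{END}"
    if START in text and END in text:
        head, rest = text.split(START, 1)
        _, tail = rest.split(END, 1)
        return head + block + tail
    # first run: append a fresh delimited block
    return text.rstrip() + "\n\n" + block + "\n"
```

Diffing the file before and after `splice` then shows exactly what the upstream update changed, and deleting the marked region undoes it.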
Other devices may be able to access the API if firewall settings allow it.
The REST API example binds to all interfaces, which is purpose-aligned for serving data but can expose the service to the local network.
`uvicorn rest_api:app --host 0.0.0.0 --port 8000`
Bind to `127.0.0.1` unless network access is intentionally needed, and add authentication if exposing it beyond your machine.
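If the service genuinely must listen beyond loopback, a bearer-token check is a small addition. A hedged sketch of such a check; the header format and shared-token scheme are assumptions, not part of the scanned skill:

```python
import hmac

def authorized(header_value: str, token: str) -> bool:
    """Timing-safe check of an Authorization header against a shared token."""
    # an empty configured token always fails, so auth cannot be
    # silently disabled by a missing environment variable
    return bool(token) and hmac.compare_digest(header_value, f"Bearer {token}")
```

Wired into the API's request handling, this rejects callers that do not present `Authorization: Bearer <token>`, which matters most when the bind address is not `127.0.0.1`.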
Running the full scraper executes local code and contacts external model-ranking sites.
The scraper workflow requires installing packages, installing a browser runtime, and running local Python code. This is expected for the stated scraping purpose, but users should be aware of it.
```
pip install -r requirements.txt
pip install playwright
playwright install chromium
python scrapers/run_all.py --export
```
Run these commands only in a trusted checkout, preferably in a virtual environment, after reviewing dependencies and scraper behavior.
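The recommended isolation can be scripted. A minimal sketch in Python, assuming a trusted checkout on a POSIX system (on Windows the venv binaries live under `Scripts\` instead of `bin/`):

```python
# Create an isolated virtual environment so the scraper's dependencies
# stay out of the system Python. Paths are illustrative assumptions.
import subprocess
import sys
from pathlib import Path

subprocess.check_call([sys.executable, "-m", "venv", ".venv"])
venv_pip = Path(".venv") / "bin" / "pip"  # Scripts\pip.exe on Windows
# The README's install commands would then target this environment, e.g.:
# subprocess.check_call([str(venv_pip), "install", "-r", "requirements.txt"])
```

Reviewing `requirements.txt` and the scraper sources before running the commented-out install step keeps the "trusted checkout" condition meaningful.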
