Nutrition Provider R2 Review
Audited by ClawScan on May 1, 2026.
Overview
The skill's documented behavior is coherent: it crawls public nutrition records and uploads them to Cloudflare R2, with disclosed but noteworthy R2 credential use, cloud writes, and external/runtime dependencies.
This appears purpose-aligned for ingesting public nutrition-provider records into R2. Before installing:
- make sure you are comfortable giving the agent R2 write access,
- use a dedicated bucket or prefix,
- review the separately installed `scrapling-official` crawler skill, and
- run with conservative pagination and `--skip-existing` until the output layout is verified.
Findings (4)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or perform runtime probes.
If installed and used with real R2 credentials, the agent can upload data into the configured Cloudflare R2 bucket.
The skill requires Cloudflare R2 credentials that can authorize bucket access. This is expected for uploading records, but users should take note, since the registry metadata declares no required environment variables.
Confirm the R2 credentials are present:
- `R2_ACCOUNT_ID`
- `R2_ACCESS_KEY_ID`
- `R2_SECRET_ACCESS_KEY`
- `R2_BUCKET`
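For reference, a minimal sketch of how these variables typically feed an S3-compatible R2 client is shown below. The endpoint format is Cloudflare's documented S3 API endpoint for R2; everything else is illustrative, not the skill's actual code.

```python
import os

import boto3

# Minimal sketch, not the skill's actual code: build an S3-compatible
# client for Cloudflare R2 from the four expected environment variables.
account_id = os.environ["R2_ACCOUNT_ID"]
client = boto3.client(
    "s3",
    # Cloudflare's S3-compatible endpoint for R2.
    endpoint_url=f"https://{account_id}.r2.cloudflarestorage.com",
    aws_access_key_id=os.environ["R2_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["R2_SECRET_ACCESS_KEY"],
)
bucket = os.environ["R2_BUCKET"]
```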
Use a dedicated bucket or narrowly scoped R2 access key where possible, set only the needed environment variables for the run, and remove or rotate credentials afterward.
A mistaken run could upload unwanted objects, consume storage, or overwrite dataset objects in the selected bucket.
The helper performs R2 object writes via `put_object`. This matches the skill's purpose, but `put_object` will create or silently replace objects in the configured bucket if invoked with the wrong key or prefix.
```python
client.put_object(Bucket=bucket, Key=key, Body=payload, ContentType=content_type)
```
Use `--skip-existing`, a dedicated prefix/run ID, and a test or isolated bucket until the crawl and key layout are verified.
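Below is a hedged sketch of the kind of guard `--skip-existing` implies: check for the object before writing so reruns do not clobber prior uploads. `upload_if_absent` is a hypothetical helper name; the skill's real implementation may differ.

```python
import botocore.exceptions

def upload_if_absent(client, bucket, key, payload, content_type):
    """Hypothetical --skip-existing guard: write only if the key is absent."""
    try:
        client.head_object(Bucket=bucket, Key=key)
        return False  # object already exists; skip the write
    except botocore.exceptions.ClientError as err:
        if err.response["Error"]["Code"] != "404":
            raise  # a real access/transport error, not just a missing object
    client.put_object(Bucket=bucket, Key=key, Body=payload,
                      ContentType=content_type)
    return True
```

Pairing such a guard with a dedicated prefix or run ID in the key keeps a mistaken run from touching existing dataset objects.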
The safety and behavior of the crawling phase depend on the separately installed `scrapling-official` skill.
The actual crawl execution, endpoint discovery, and fetch escalation are delegated to another skill. That dependency is disclosed and purpose-aligned, but it is outside this artifact set.
This skill depends on `scrapling-official` for crawling.
Review and configure `scrapling-official` separately before running this skill, especially its fetch escalation and site-access behavior.
Future runs may install newer `boto3` or `botocore` versions than originally tested.
The uv script declares runtime Python dependencies with lower-bound version ranges rather than pinned exact versions. This is common and purpose-aligned for S3/R2 uploads, but it means dependency resolution can change over time.
```python
# dependencies = [
#     "boto3>=1.34.0",
#     "botocore>=1.34.0",
# ]
```
Run in a controlled environment and consider pinning or locking dependency versions for repeatable production use.
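For example, the uv inline script metadata could pin exact versions instead of lower bounds; the versions below are illustrative, not a recommendation of specific releases.

```python
# /// script
# dependencies = [
#     "boto3==1.34.0",
#     "botocore==1.34.0",
# ]
# ///
```

Exact pins trade automatic fixes for reproducibility, so they suit production runs where the dependency set has been vetted once and should not drift.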
