Apify Runner

v1.0.0

Run any Apify Actor to scrape web data (Instagram, TikTok, Reddit, Twitter, etc). Handles Actor discovery, quality filtering, probe testing, batched execution…

Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description match the included scripts and instructions: searching the Apify Store, selecting Actors, running probe/full runs, and collecting datasets. However, registry metadata lists no required env vars while SKILL.md and the scripts clearly require an APIFY_TOKEN or config.json — a metadata mismatch that should be corrected.
Instruction Scope
SKILL.md's runtime instructions stay within the stated task: discover Actors, build run_input from Actor .md, run probe tests, batch runs, and save results. It directs the agent to run the included Python scripts and to fetch Actor docs from apify.com — these are expected for this skill. It does not instruct reading unrelated system files or exfiltrating data to external hosts outside api.apify.com and apify.com.
Install Mechanism
No install spec; the skill is instructions plus bundled Python scripts, which is lower risk than a remote download. The scripts require the 'requests' Python package but no installer is provided; if requests is missing, the scripts will fail. Nothing downloads or executes code from untrusted URLs.
Credentials
The skill legitimately needs an Apify API token (APIFY_TOKEN) or a config.json with tokens to start Actor runs; this is appropriate for its purpose. The problem is the registry metadata lists 'Required env vars: none' whereas SKILL.md and both scripts require a token — an incoherence that could mislead users into installing without providing credentials. Also note that an Apify token grants the ability to start runs which may incur billing or access data — users should use least-privilege tokens and be aware of billing implications.
Persistence & Privilege
The skill does not request always:true, does not modify other skills, and has no special persistence or elevated privileges. It runs transient Python scripts and writes results only to the specified output path.
What to consider before installing
This skill appears to implement the described Apify-run workflow, but before installing:

  1. SKILL.md and the scripts require an APIFY_TOKEN or config.json (registry metadata omits this) — provide a token only if you trust the skill.
  2. The skill runs bundled Python scripts that make network requests to api.apify.com and apify.com and may start runs that incur billing; prefer a limited-scope or throwaway token and verify billing/permissions in your Apify account.
  3. Ensure your environment has Python 3 and the 'requests' package, or the scripts will fail.
  4. If you need stronger assurance, review the included scripts (they are small and readable) or run them in an isolated environment, and ask the publisher to correct the metadata so the required APIFY_TOKEN is declared explicitly.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97ce9whyz487wfhz5cjf3t2v182c8tt
273 downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Apify Skill

Run any Apify Actor through a standardized workflow: search → validate → execute → collect results.

Prerequisites

  • APIFY_TOKEN env var, or a config.json with tokens (copy config.json.example)
  • Python 3 with requests installed
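
A hypothetical shape for config.json, inferred from the tokens-map description in Step 2; the field names here are assumptions, so copy config.json.example for the authoritative format:

```json
{
  "tokens": {
    "default": "apify_api_XXXXXXXX",
    "work": "apify_api_YYYYYYYY"
  }
}
```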

Workflow

Step 1: Parse User Intent

Extract from the user's request:

  • Platform/target (Instagram, TikTok, Reddit, etc.)
  • What to scrape (posts, profiles, hashtags, comments, etc.)
  • Targets (URLs, usernames, keywords)
  • Quantity/filters (how many, time range, min likes, etc.)

Step 2: Select Token

If user specifies a token name or the task maps to a specific account, use that. Otherwise use default.

Token can be provided via:

  1. --token flag (highest priority)
  2. config.json tokens map (by --token-name)
  3. APIFY_TOKEN env var (fallback)
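
The priority order above can be sketched as follows; this is a minimal illustration of the resolution logic, not the scripts' actual code, and the `tokens` config shape is an assumption:

```python
import json
import os

def resolve_token(flag_token=None, config_path=None, token_name="default"):
    """Resolve a token: --token flag > config.json tokens map > APIFY_TOKEN env."""
    if flag_token:                       # 1. explicit --token flag wins
        return flag_token
    if config_path and os.path.exists(config_path):
        with open(config_path) as f:     # 2. look up by --token-name
            tokens = json.load(f).get("tokens", {})
        if token_name in tokens:
            return tokens[token_name]
    return os.environ.get("APIFY_TOKEN") # 3. env var fallback
```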

Step 3: Search & Select Actor

Run the search script:

python3 scripts/search_actor.py "instagram scraper" --top 3

Output: ranked candidates with score, success rate, rating, pricing model.

Quality filters (built into script):

  • notice = NONE (not deprecated)
  • 30-day success rate ≥ 95%
  • 30-day runs ≥ 1,000
  • User rating ≥ 4.0

Pick the top-ranked candidate. If user has a preference or prior experience with a specific Actor, skip search.
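
The quality filters can be sketched as a predicate like the one below; the candidate field names are assumptions for illustration, not search_actor.py's real output schema:

```python
# Hypothetical quality gate matching the thresholds listed above.
def passes_quality_filters(actor):
    return (
        actor.get("notice") == "NONE"                  # not deprecated
        and actor.get("success_rate_30d", 0) >= 0.95   # 30-day success rate
        and actor.get("runs_30d", 0) >= 1000           # 30-day run volume
        and actor.get("rating", 0) >= 4.0              # user rating
    )

candidates = [
    {"id": "good/actor", "notice": "NONE", "success_rate_30d": 0.98,
     "runs_30d": 5000, "rating": 4.6},
    {"id": "old/actor", "notice": "DEPRECATED", "success_rate_30d": 0.99,
     "runs_30d": 9000, "rating": 4.8},
]
kept = [a["id"] for a in candidates if passes_quality_filters(a)]
```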

Step 4: Get Actor Schema & Build run_input

Fetch the Actor's documentation:

web_fetch https://apify.com/{actor_id}.md

Read the input schema section. Construct run_input JSON based on:

  • The Actor's required/optional fields
  • The user's targets and filters
  • Sensible defaults from the documentation

Do NOT ask the user to write JSON. Build it from their natural language request.
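
As a sketch of this step, a parsed request can be mapped to run_input programmatically; the field names below mirror the apify/instagram-scraper example in "Common Actor Patterns" and should always be verified against the Actor's .md page:

```python
# Hypothetical helper: turn parsed intent into run_input for an
# Instagram-style Actor. Field names are taken from the example table,
# not from a live schema fetch.
def build_instagram_input(usernames, what="posts", limit=10):
    return {
        "directUrls": [f"https://instagram.com/{u}/" for u in usernames],
        "resultsType": what,
        "resultsLimit": limit,
    }

run_input = build_instagram_input(["nasa"], what="posts", limit=3)
```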

Step 5: Probe Test (Top 1 → Top 2 → Top 3 fallback)

Test with minimal input before committing to full run:

python3 scripts/apify_runner.py {actor_id} \
  --input '{...}' \
  --token {token} \
  --probe-only \
  --list-key {key}

The probe automatically uses the first 2 items from the list field.

Checks:

  • Run starts successfully (no permission/billing errors)
  • Run completes (no timeout/crash)
  • Returns non-empty data

If probe fails → try next candidate Actor. If all 3 fail → report to user with Actor URLs for manual activation.
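
The "first 2 items" probe behavior can be sketched like this; it assumes apify_runner.py derives the probe input by truncating the list field, which is an inference from the description, not the script's verified code:

```python
import copy

def make_probe_input(run_input, list_key, probe_items=2):
    """Copy run_input, keeping only the first `probe_items` entries of list_key."""
    probe = copy.deepcopy(run_input)  # don't mutate the full-run input
    if list_key and isinstance(probe.get(list_key), list):
        probe[list_key] = probe[list_key][:probe_items]
    return probe

full = {"hashtags": ["a", "b", "c", "d"], "resultsPerPage": 50}
probe = make_probe_input(full, "hashtags")
```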

Step 6: Full Execution

python3 scripts/apify_runner.py {actor_id} \
  --input '{...}' \
  --token {token} \
  --output /path/to/results.json \
  --list-key {key} \
  --batch-size 50 \
  --probe

Key flags:

| Flag | Purpose | Default |
|------|---------|---------|
| --list-key | Field in run_input containing the list to batch | None (no batching) |
| --batch-size | Items per batch | 50 |
| --timeout | Per-batch timeout (seconds) | 600 |
| --probe | Run probe before full execution | Off |
| --output | Save results to JSON file | Stdout |
| --config | Path to config.json for token lookup | None |
| --token-name | Which token to use from config | "default" |

Batching rules:

  • ≤ batch-size items → single run
  • > batch-size items → auto-split, 3s pause between batches
  • Each batch has independent timeout (default 10 min)
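
The splitting rule can be sketched as below; this is an illustration of the batching described above (the 3 s pause between batches and per-batch timeouts are omitted):

```python
def split_batches(items, batch_size=50):
    """<= batch_size items: one run; otherwise split into batch_size chunks."""
    if len(items) <= batch_size:
        return [items]
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

batches = split_batches(list(range(120)), batch_size=50)
```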

Step 7: Return Results

  • Report total items collected
  • Save raw JSON to specified output path
  • Summarize key stats (items count, batches, any failures)
  • Let the caller handle filtering/reporting/delivery

Common Actor Patterns

| Platform | Typical Actor | list_key | Example input |
|----------|---------------|----------|---------------|
| Instagram | apify/instagram-scraper | directUrls | `{"directUrls": ["https://instagram.com/user/"], "resultsType": "posts", "resultsLimit": 3}` |
| TikTok | clockworks/tiktok-scraper | hashtags | `{"hashtags": ["cooking"], "resultsPerPage": 50}` |
| Reddit | trudax/reddit-scraper-lite | startUrls | `{"startUrls": [{"url": "https://reddit.com/r/cooking/top/?t=month"}], "maxItems": 30}` |
| Twitter | apidojo/tweet-scraper | — | Check .md for current schema |

These are starting points. Always verify with the Actor's .md page for current schema.
