Install

```bash
openclaw skills install youtube-apify-transcript
```

Fetch YouTube transcripts via the APIFY API. Works from cloud IPs (Hetzner, AWS, etc.) by bypassing YouTube's bot detection. Features local caching (free repeat requests) and batch mode. Requires `APIFY_API_TOKEN` and the Python `requests` package. No credit card required for APIFY's free tier.
YouTube blocks transcript requests from cloud IPs (AWS, GCP, Hetzner, etc.). APIFY routes the request through residential proxies, reliably bypassing bot detection.
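Under the hood, the call pattern is roughly APIFY's `run-sync-get-dataset-items` endpoint, which starts an actor run and returns its dataset items in a single request. A minimal sketch, assuming a hypothetical actor slug and input field name (the skill's actual actor and input schema may differ):

```python
import os

APIFY_BASE = "https://api.apify.com/v2"
ACTOR = "someuser~youtube-transcript"  # hypothetical actor slug, not the skill's actual one

def build_run_request(video_url: str):
    # run-sync-get-dataset-items runs the actor and returns its dataset in one call
    endpoint = f"{APIFY_BASE}/acts/{ACTOR}/run-sync-get-dataset-items"
    params = {"token": os.environ["APIFY_API_TOKEN"]}
    payload = {"videoUrls": [video_url]}  # input field name is actor-specific (assumption)
    return endpoint, params, payload

def fetch_transcript_items(video_url: str):
    import requests  # the skill's declared Python dependency
    endpoint, params, payload = build_run_request(video_url)
    resp = requests.post(endpoint, params=params, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()
```

Authentication is just the `token` query parameter; everything else is the actor's own input schema.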
```bash
# Add to ~/.bashrc or ~/.zshrc
export APIFY_API_TOKEN="apify_api_YOUR_TOKEN_HERE"

# Install the Python dependency
pip install requests

# Or use a .env file (never commit this!)
echo 'APIFY_API_TOKEN=apify_api_YOUR_TOKEN_HERE' >> .env
```
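If the token lives in a `.env` file, it still has to end up in the process environment. A minimal stdlib-only loader sketch — `load_env` is an illustrative helper, not necessarily how the skill itself reads the file:

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: KEY=VALUE lines, skipping blanks and # comments."""
    if not os.path.exists(path):
        return
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # setdefault: a real exported variable wins over the .env file
                os.environ.setdefault(key.strip(), value.strip())
```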
```bash
# Get transcript as text (uses cache by default)
python3 scripts/fetch_transcript.py "https://www.youtube.com/watch?v=VIDEO_ID"

# Short URL also works
python3 scripts/fetch_transcript.py "https://youtu.be/VIDEO_ID"

# Output to file
python3 scripts/fetch_transcript.py "URL" --output transcript.txt

# JSON format (includes timestamps)
python3 scripts/fetch_transcript.py "URL" --json

# Both: JSON to file
python3 scripts/fetch_transcript.py "URL" --json --output transcript.json

# Specify language preference
python3 scripts/fetch_transcript.py "URL" --lang de
```
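Both URL forms carry the same 11-character video ID. A sketch of how such a script might normalize them (illustrative, not the skill's actual code):

```python
from urllib.parse import urlparse, parse_qs

def extract_video_id(url: str) -> str:
    """Extract the video ID from watch-page or short YouTube URLs."""
    parsed = urlparse(url)
    if parsed.hostname == "youtu.be":
        # Short form: the ID is the path, e.g. youtu.be/dQw4w9WgXcQ
        return parsed.path.lstrip("/")
    if parsed.hostname in ("www.youtube.com", "youtube.com", "m.youtube.com"):
        # Long form: the ID is the v= query parameter
        return parse_qs(parsed.query)["v"][0]
    raise ValueError(f"Unrecognized YouTube URL: {url}")

print(extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # dQw4w9WgXcQ
print(extract_video_id("https://youtu.be/dQw4w9WgXcQ"))                 # dQw4w9WgXcQ
```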
Transcripts are cached locally by default. Repeat requests for the same video cost $0.
```bash
# First request: fetches from APIFY ($0.007)
python3 scripts/fetch_transcript.py "URL"

# Second request: uses cache (FREE!)
python3 scripts/fetch_transcript.py "URL"
# Output: [cached] Transcript for: VIDEO_ID

# Bypass cache (force fresh fetch)
python3 scripts/fetch_transcript.py "URL" --no-cache

# View cache stats
python3 scripts/fetch_transcript.py --cache-stats

# Clear all cached transcripts
python3 scripts/fetch_transcript.py --clear-cache
```
Cache location: `.cache/` in the skill directory (override with the `YT_TRANSCRIPT_CACHE_DIR` environment variable).
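A plausible cache layout keys files by video ID and language; the exact filename scheme below is an assumption for illustration, not the skill's documented format:

```python
import os

def cache_path(video_id: str, lang: str = "en") -> str:
    # YT_TRANSCRIPT_CACHE_DIR overrides the default .cache/ in the skill directory
    cache_dir = os.environ.get("YT_TRANSCRIPT_CACHE_DIR", ".cache")
    return os.path.join(cache_dir, f"{video_id}_{lang}.txt")

def get_cached(video_id: str, lang: str = "en"):
    path = cache_path(video_id, lang)
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            return f.read()  # cache hit: no API call, $0
    return None              # cache miss: caller fetches from APIFY and writes the file
```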
Process multiple videos at once:
```bash
# Create a file with URLs (one per line)
cat > urls.txt << EOF
https://youtube.com/watch?v=VIDEO1
https://youtu.be/VIDEO2
https://youtube.com/watch?v=VIDEO3
EOF

# Process all URLs
python3 scripts/fetch_transcript.py --batch urls.txt
# Output:
# [1/3] Fetching VIDEO1...
# [2/3] [cached] VIDEO2
# [3/3] Fetching VIDEO3...
# Batch complete: 2 fetched, 1 cached, 0 failed
# [Cost: ~$0.014 for 2 API call(s)]

# Batch with JSON output to file
python3 scripts/fetch_transcript.py --batch urls.txt --json --output all_transcripts.json
```
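The summary and cost lines above are simple to derive: only fresh fetches cost money, at roughly $0.007 each. An illustrative sketch of that accounting:

```python
def batch_summary(results, cost_per_fetch=0.007):
    # results: one status per URL, each "fetched", "cached", or "failed"
    fetched = results.count("fetched")
    cached = results.count("cached")
    failed = results.count("failed")
    cost = fetched * cost_per_fetch  # cached and failed entries cost nothing
    return (f"Batch complete: {fetched} fetched, {cached} cached, {failed} failed\n"
            f"[Cost: ~${cost:.3f} for {fetched} API call(s)]")

print(batch_summary(["fetched", "cached", "fetched"]))
# Batch complete: 2 fetched, 1 cached, 0 failed
# [Cost: ~$0.014 for 2 API call(s)]
```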
Text (default):

```
Hello and welcome to this video.
Today we're going to talk about...
```

JSON (`--json`):

```json
{
  "video_id": "dQw4w9WgXcQ",
  "title": "Video Title",
  "transcript": [
    {"start": 0.0, "duration": 2.5, "text": "Hello and welcome"},
    {"start": 2.5, "duration": 3.0, "text": "to this video"}
  ],
  "full_text": "Hello and welcome to this video..."
}
```
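The `full_text` field is just the segment texts joined in order; a one-line sketch of that relationship:

```python
def full_text(segments):
    # Flatten timestamped segments into the plain transcript string
    return " ".join(seg["text"] for seg in segments)

segments = [
    {"start": 0.0, "duration": 2.5, "text": "Hello and welcome"},
    {"start": 2.5, "duration": 3.0, "text": "to this video"},
]
print(full_text(segments))  # Hello and welcome to this video
```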
When the user asks to summarize a YouTube video, first fetch the transcript using the script, then summarize the transcript text directly with your own model capabilities. Do NOT use a `--summarize` flag.
The script handles common errors.
```yaml
metadata:
  clawdbot:
    emoji: "📹"
    requires:
      env: ["APIFY_API_TOKEN"]
      bins: ["python3"]
      python:
        packages: ["requests"]
```