Browser Cash

Spin up unblocked browser sessions via Browser.cash for web automation. Sessions bypass anti-bot protections (Cloudflare, DataDome, etc.), making them ideal for scraping and automation.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
3 · 3.3k · 14 current installs · 14 all-time installs
Security Scan

VirusTotal: Benign (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill name/description (remote, anti-bot bypassing browser sessions) aligns with the instructions for creating sessions and connecting to a CDP URL. However, the skill metadata declares no required credentials while the runtime instructions require an API key stored in Clawdbot config at skills.entries.browser-cash.apiKey—this is an omission in declared requirements. Requiring curl and jq is coherent with the provided curl/jq examples.
Instruction Scope
SKILL.md instructs the agent to: read/write Clawdbot config entries, run npm install in ~/clawd (installing playwright and puppeteer-core), execute Node code that connects to remote CDP WebSocket endpoints, and interact with Browser.cash APIs. These actions are within the scope of browser automation, but they include local package installs, execution of arbitrary Node snippets, and opening a bi-directional WebSocket to an externally hosted browser which can load arbitrary pages—each of which has operational and data-exfiltration implications the user should consider.
Install Mechanism
There is no formal install spec in the registry (instruction-only), which is lower registry risk, but the runtime instructions tell the agent to run npm install to fetch Playwright/puppeteer-core into ~/clawd/node_modules. That will download third‑party code and browser binaries from upstream registries/hosts at runtime (potentially large downloads). Because installation is performed via shell commands in SKILL.md rather than an audited install spec, this increases the surface area.
Credentials
The skill does not declare required environment variables or a primary credential in metadata, yet the instructions require an API key stored in Clawdbot config (skills.entries.browser-cash.apiKey) and reference it via BROWSER_CASH_KEY. This mismatch is a material omission: users must supply a secret (API key) but the skill metadata does not make that explicit. No other unrelated credentials are requested.
Persistence & Privilege
The always flag is false and the skill is user-invocable only (normal). The skill recommends writing the API key into the agent's Clawdbot config, a persistent change to agent config. That is expected for a credential-based integration, but you should verify how the Clawdbot config is stored and who or what can read it.
What to consider before installing
Before installing or using this skill:

  1. Confirm you trust the Browser.cash service; it will receive your API key and run your browser sessions.
  2. SKILL.md expects you to store the API key in Clawdbot config, a requirement the registry metadata fails to declare.
  3. The instructions install Playwright/puppeteer-core into ~/clawd and will download browser binaries and code from npm/Playwright hosts; review and approve those downloads.
  4. The skill executes Node snippets that connect to remote CDP WebSocket endpoints. Those remote browsers can load arbitrary pages and could be used to access or exfiltrate data you navigate to, so avoid using sensitive accounts or internal-only services without proper controls.
  5. Check company/legal policies on bypassing anti-bot protections and scraping.
  6. If you proceed, inspect and run the curl/npm/node commands manually in a controlled environment first rather than allowing full autonomous execution.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
Download zip
Tags: automation, browser, scraping, anti-detect, cdp, playwright, puppeteer, web, unblocked, latest

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🌐 Clawdis
Bins: curl, jq

SKILL.md

browser-cash

Spin up unblocked browser sessions via Browser.cash for web automation. These sessions bypass common anti-bot protections (Cloudflare, DataDome, etc.), making them ideal for scraping, testing, and automation tasks that would otherwise get blocked.

When to use: Any browser automation task—scraping, form filling, testing, screenshots. Browser.cash sessions appear as real browsers and handle bot detection automatically.

Setup

API Key is stored in clawdbot config at skills.entries.browser-cash.apiKey.

If not configured, prompt the user:

Get your API key from https://dash.browser.cash and run:

clawdbot config set skills.entries.browser-cash.apiKey "your_key_here"

Reading the key:

BROWSER_CASH_KEY=$(clawdbot config get skills.entries.browser-cash.apiKey)
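A small guard makes a missing key fail loudly instead of sending an empty Authorization header. This is a sketch; it assumes clawdbot config get prints an empty string (or the literal null) when the key is unset.

```shell
# Validate the key read from clawdbot config before any API call is made.
require_key() {
  # $1: the value returned by `clawdbot config get skills.entries.browser-cash.apiKey`
  if [ -z "$1" ] || [ "$1" = "null" ]; then
    echo "browser-cash: API key not set; run:" >&2
    echo '  clawdbot config set skills.entries.browser-cash.apiKey "your_key_here"' >&2
    return 1
  fi
}

# Usage: require_key "$BROWSER_CASH_KEY" || exit 1
```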

Before first use, check and install Playwright if needed:

if [ ! -d ~/clawd/node_modules/playwright ]; then
  cd ~/clawd && npm install playwright puppeteer-core
fi

API Basics

curl -X POST "https://api.browser.cash/v1/..." \
  -H "Authorization: Bearer $BROWSER_CASH_KEY" \
  -H "Content-Type: application/json"

Create a Browser Session

Basic session:

curl -X POST "https://api.browser.cash/v1/browser/session" \
  -H "Authorization: Bearer $BROWSER_CASH_KEY" \
  -H "Content-Type: application/json" \
  -d '{}'

Response:

{
  "sessionId": "abc123...",
  "status": "active",
  "servedBy": "node-id",
  "createdAt": "2025-01-20T01:51:25.000Z",
  "stoppedAt": null,
  "cdpUrl": "wss://gcp-usc1-1.browser.cash/v1/consumer/abc123.../devtools/browser/uuid"
}

With options:

curl -X POST "https://api.browser.cash/v1/browser/session" \
  -H "Authorization: Bearer $BROWSER_CASH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "country": "US",
    "windowSize": "1920x1080",
    "profile": {
      "name": "my-profile",
      "persist": true
    }
  }'

Session Options

Option           Type     Description
country          string   2-letter ISO code (e.g., "US", "DE", "GB")
windowSize       string   Browser dimensions, e.g., "1920x1080"
proxyUrl         string   SOCKS5 proxy URL (optional)
profile.name     string   Named browser profile for session persistence
profile.persist  boolean  Save cookies/storage after session ends
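Hand-writing JSON inside a -d string gets fragile once values come from variables. jq -n builds the options payload with proper escaping; a sketch using the options from the table above.

```shell
# Build the session-options payload with jq so every value is JSON-escaped.
COUNTRY="US"
WINDOW_SIZE="1920x1080"
PROFILE_NAME="my-profile"

PAYLOAD=$(jq -n \
  --arg country "$COUNTRY" \
  --arg windowSize "$WINDOW_SIZE" \
  --arg name "$PROFILE_NAME" \
  '{country: $country, windowSize: $windowSize, profile: {name: $name, persist: true}}')

echo "$PAYLOAD"
```

Pass it to the create-session call with -d "$PAYLOAD".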

Using Browser.cash with Clawdbot

Browser.cash returns a WebSocket CDP URL (wss://...). Use one of these approaches:

Option 1: Direct CDP via exec (Recommended)

Important: Before running Playwright/Puppeteer scripts, ensure dependencies are installed:

[ -d ~/clawd/node_modules/playwright ] || (cd ~/clawd && npm install playwright puppeteer-core)

Use Playwright or Puppeteer in an exec block to connect directly to the CDP URL:

# 1. Create session
BROWSER_CASH_KEY=$(clawdbot config get skills.entries.browser-cash.apiKey)
SESSION=$(curl -s -X POST "https://api.browser.cash/v1/browser/session" \
  -H "Authorization: Bearer $BROWSER_CASH_KEY" \
  -H "Content-Type: application/json" \
  -d '{"country": "US", "windowSize": "1920x1080"}')

SESSION_ID=$(echo "$SESSION" | jq -r '.sessionId')
CDP_URL=$(echo "$SESSION" | jq -r '.cdpUrl')

# 2. Use via Node.js exec (Playwright)
node -e "
const { chromium } = require('playwright');
(async () => {
  const browser = await chromium.connectOverCDP('$CDP_URL');
  const context = browser.contexts()[0];
  const page = context.pages()[0] || await context.newPage();
  await page.goto('https://example.com');
  console.log('Title:', await page.title());
  await browser.close();
})();
"

# 3. Stop session when done
curl -X DELETE "https://api.browser.cash/v1/browser/session?sessionId=$SESSION_ID" \
  -H "Authorization: Bearer $BROWSER_CASH_KEY"

Option 2: Curl for session management

The session endpoints (create, status, stop, list) are plain HTTPS and can be driven with curl alone. Page interaction, however, happens over the WebSocket CDP endpoint, which curl cannot drive in normal usage; for navigating and extracting content, use the exec approach in Option 1.

Note on Clawdbot browser tool

Clawdbot's native browser tool expects HTTP control server URLs, not raw WebSocket CDP. The gateway config.patch approach works when Clawdbot's browser control server proxies the connection. For direct Browser.cash CDP, use the exec approach above.

Get Session Status

curl "https://api.browser.cash/v1/browser/session?sessionId=YOUR_SESSION_ID" \
  -H "Authorization: Bearer $BROWSER_CASH_KEY"

Statuses: starting, active, completed, error
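A new session may report starting briefly before the CDP endpoint is usable. A small polling loop avoids connecting too early; this is a sketch in which the status-fetching command is passed in as a parameter so the real curl call (shown commented) can be substituted.

```shell
# Poll a status-printing command until it reports "active".
wait_for_active() {
  # $1: command that prints the current status; $2: max attempts (default 30)
  local fetch=$1 max_tries=${2:-30} tries=0 status
  while [ "$tries" -lt "$max_tries" ]; do
    status=$($fetch)
    if [ "$status" = "active" ]; then return 0; fi
    if [ "$status" = "error" ]; then return 1; fi
    tries=$((tries + 1))
    sleep 1
  done
  return 1
}

# Real usage (requires BROWSER_CASH_KEY and SESSION_ID to be set):
# fetch_status() {
#   curl -s "https://api.browser.cash/v1/browser/session?sessionId=$SESSION_ID" \
#     -H "Authorization: Bearer $BROWSER_CASH_KEY" | jq -r '.status'
# }
# wait_for_active fetch_status 30
```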

Stop a Session

curl -X DELETE "https://api.browser.cash/v1/browser/session?sessionId=YOUR_SESSION_ID" \
  -H "Authorization: Bearer $BROWSER_CASH_KEY"

List Sessions

curl "https://api.browser.cash/v1/browser/sessions?page=1&pageSize=20" \
  -H "Authorization: Bearer $BROWSER_CASH_KEY"

Browser Profiles

Profiles persist cookies, localStorage, and session data across sessions—useful for staying logged in or maintaining state.

List profiles:

curl "https://api.browser.cash/v1/browser/profiles" \
  -H "Authorization: Bearer $BROWSER_CASH_KEY"

Delete profile:

curl -X DELETE "https://api.browser.cash/v1/browser/profile?profileName=my-profile" \
  -H "Authorization: Bearer $BROWSER_CASH_KEY"

Connecting via CDP

The cdpUrl is a WebSocket endpoint for Chrome DevTools Protocol. Use it with any CDP-compatible library.

Playwright:

const { chromium } = require('playwright');
const browser = await chromium.connectOverCDP(cdpUrl);
const context = browser.contexts()[0];
const page = context.pages()[0] || await context.newPage();
await page.goto('https://example.com');

Puppeteer:

const puppeteer = require('puppeteer-core');
const browser = await puppeteer.connect({ browserWSEndpoint: cdpUrl });
const pages = await browser.pages();
const page = pages[0] || await browser.newPage();
await page.goto('https://example.com');

Full Workflow Example

# 0. Ensure Playwright is installed
[ -d ~/clawd/node_modules/playwright ] || (cd ~/clawd && npm install playwright puppeteer-core)

# 1. Create session
BROWSER_CASH_KEY=$(clawdbot config get skills.entries.browser-cash.apiKey)
SESSION=$(curl -s -X POST "https://api.browser.cash/v1/browser/session" \
  -H "Authorization: Bearer $BROWSER_CASH_KEY" \
  -H "Content-Type: application/json" \
  -d '{"country": "US", "windowSize": "1920x1080"}')

SESSION_ID=$(echo "$SESSION" | jq -r '.sessionId')
CDP_URL=$(echo "$SESSION" | jq -r '.cdpUrl')

# 2. Connect with Playwright/Puppeteer using $CDP_URL...

# 3. Stop session when done
curl -X DELETE "https://api.browser.cash/v1/browser/session?sessionId=$SESSION_ID" \
  -H "Authorization: Bearer $BROWSER_CASH_KEY"
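Sessions keep running (and counting against usage) until stopped, so it is worth wiring the DELETE into a shell trap so it fires even when the script errors partway through. This sketch echoes the curl command (dry run) so it is safe to execute as-is; drop the echo in real use.

```shell
# Stop the session automatically when the script exits, even on error.
# The leading `echo` makes this a dry run; remove it for real use.
cleanup() {
  if [ -n "${SESSION_ID:-}" ]; then
    echo curl -s -X DELETE \
      "https://api.browser.cash/v1/browser/session?sessionId=$SESSION_ID" \
      -H "Authorization: Bearer ${BROWSER_CASH_KEY:-}"
  fi
}
trap cleanup EXIT

SESSION_ID="abc123"   # would come from the create-session call above
```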

Scraping Tips

When extracting data from pages with lazy-loading or infinite scroll:

// Scroll to load all products
async function scrollToBottom(page) {
  let previousHeight = 0;
  while (true) {
    const currentHeight = await page.evaluate(() => document.body.scrollHeight);
    if (currentHeight === previousHeight) break;
    previousHeight = currentHeight;
    await page.evaluate(() => window.scrollTo(0, document.body.scrollHeight));
    await page.waitForTimeout(1500); // Wait for content to load
  }
}

// Wait for specific elements
await page.waitForSelector('.product-card', { timeout: 10000 });

// Handle "Load More" buttons
const loadMore = await page.$('button.load-more');
if (loadMore) {
  await loadMore.click();
  await page.waitForTimeout(2000);
}

Common patterns:

  • Always scroll to trigger lazy-loaded content
  • Wait for network idle: await page.waitForLoadState('networkidle')
  • Use page.waitForSelector() before extracting elements
  • Add delays between actions to avoid rate limiting
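Infinite-scroll extraction often yields overlapping batches, with items re-collected after each scroll. If each batch is dumped as JSON, jq can merge and dedupe by a stable key; the url field here is a hypothetical identifier, so adapt it to whatever uniquely identifies items on your page.

```shell
# Merge scroll batches and dedupe by a stable key (a hypothetical `url` field).
BATCHES='[
  [{"url": "/a", "title": "A"}, {"url": "/b", "title": "B"}],
  [{"url": "/b", "title": "B"}, {"url": "/c", "title": "C"}]
]'

echo "$BATCHES" | jq 'add | unique_by(.url)'
echo "$BATCHES" | jq 'add | unique_by(.url) | length'   # 3
```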

Why Browser.cash for Automation

  • Unblocked: Sessions bypass Cloudflare, DataDome, PerimeterX, and other bot protections
  • Real browser fingerprint: Appears as a genuine Chrome browser, not headless
  • CDP native: Direct WebSocket connection for Playwright, Puppeteer, or raw CDP
  • Geographic targeting: Spin up sessions in specific countries
  • Persistent profiles: Maintain login state across sessions

Notes

  • Sessions auto-terminate after extended inactivity
  • Always stop sessions when done to avoid unnecessary usage
  • Use profiles when you need to maintain logged-in state
  • SOCKS5 is the only supported proxy type
  • Clawdbot runs scripts from ~/clawd/ - install npm dependencies there
  • For full page scraping, always scroll to trigger lazy-loaded content

Files

1 total
