Install

openclaw skills install aerobase-browser

Browser-based flight search and airline check-in automation.

USE BROWSER ONLY WHEN:
Browser commands:
- browser snapshot — get ARIA tree with [ref=eN] element references
- browser type [ref=eN] "value" — type into an input field
- browser click [ref=eN] — click an element
- browser screenshot — capture current page state

Flight search workflow:
- Open https://www.google.com/travel/flights
- browser snapshot → ARIA tree
- browser snapshot → extract airlines, prices, durations, stops
- Alternatives: https://www.kayak.com, https://www.skyscanner.com

Track all browser searches in workspace file ~/browser-searches.json:
{
  "date": "2026-02-22",
  "count": 3,
  "searches": [
    { "site": "google-flights", "query": "JFK-NRT 2026-03-15", "timestamp": "2026-02-22T10:30:00Z" }
  ],
  "blockedUntil": null
}
Before each browser search:
- Read ~/browser-searches.json (create if missing)
- If the date differs from today, reset count to 0 and clear searches
- If blockedUntil is set and in the future, refuse — tell user blocked by site
- If count >= 10, refuse — tell user daily browser search limit reached
- Otherwise, increment count and append to searches
- If a site blocks us, set blockedUntil to 24 hours from now

Site routing:
- DIRECT (no proxy): Google Flights, Kayak, Booking.com, Google Hotels, Lufthansa
- SCRAPLING (stealth service, no proxy needed): Delta, British Airways, SecretFlying, seats.aero, Southwest, Hilton, Hyatt, TripAdvisor, TheFlightDeal, Going, SeatGuru, Google Travel (flights + hotels)
- PROXY (residential): United, American Airlines, Air Canada, KLM, TravelPirates
- SKIP BROWSER (use API):
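The pre-search bookkeeping above can be sketched in Python. This is a minimal illustration of the rules, not part of the skill itself; `check_and_record` and `load_state` are hypothetical helper names, and the file layout follows the ~/browser-searches.json example shown earlier.

```python
import json
from datetime import datetime
from pathlib import Path

DAILY_LIMIT = 10
STATE_FILE = Path.home() / "browser-searches.json"

def load_state() -> dict:
    """Read the tracking file, creating a fresh state if it is missing."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"date": "", "count": 0, "searches": [], "blockedUntil": None}

def check_and_record(state: dict, site: str, query: str, now: datetime) -> tuple[bool, str]:
    """Apply the pre-search rules in order; mutates state, returns (allowed, reason)."""
    today = now.date().isoformat()
    if state.get("date") != today:              # new day: reset count, clear searches
        state.update(date=today, count=0, searches=[])
    blocked = state.get("blockedUntil")
    if blocked and datetime.fromisoformat(blocked) > now:
        return False, "blocked by site until " + blocked
    if state.get("count", 0) >= DAILY_LIMIT:    # hard daily cap
        return False, "daily browser search limit reached"
    state["count"] = state.get("count", 0) + 1
    state["searches"].append({"site": site, "query": query, "timestamp": now.isoformat()})
    return True, "ok"
```

Write the mutated state back to the file after each decision so the cap survives restarts.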
When browser automation is blocked by anti-bot systems (Akamai, Cloudflare, Datadome, etc.),
use the stealth scrapling service configured via SCRAPLING_URL environment variable.
This service bypasses detection WITHOUT needing residential proxies.
Reference: Scrapling Documentation
When to use Scrapling:
How to invoke:
Fetch a page (returns JSON with status, title, HTML, challenge detection):
web_fetch {SCRAPLING_URL}/fetch?url=https://www.delta.com&json=1
Run JavaScript on a page:
POST {SCRAPLING_URL}/evaluate
Body: {"url": "https://seats.aero", "script": "document.title"}
Check service health:
web_fetch {SCRAPLING_URL}/health
Response fields:
- status: HTTP status code (200 = success)
- title: Page title
- challenge: "pass" | "captcha" | "blocked" | "challenge"
- cached: true if served from 5-min cache
- html: Page HTML (truncated to 50KB in JSON mode)
- html_length: Full HTML length

Fallback chain:
Important: Scrapling responses are cached for 5 minutes. For time-sensitive
data (live prices, seat maps), append &nocache=1 or wait for cache expiry.
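A fetch call plus the challenge check can be sketched as below. This assumes only what the section states: a SCRAPLING_URL environment variable, a /fetch endpoint taking url, json, and nocache query parameters, and a challenge field in the response. The helper names are illustrative.

```python
import json
import os
import urllib.parse
import urllib.request

def build_fetch_url(base: str, target: str, nocache: bool = False) -> str:
    """Build {SCRAPLING_URL}/fetch?url=...&json=1, optionally bypassing the 5-min cache."""
    qs = {"url": target, "json": "1"}
    if nocache:
        qs["nocache"] = "1"   # for time-sensitive data (live prices, seat maps)
    return f"{base}/fetch?{urllib.parse.urlencode(qs)}"

def ensure_pass(page: dict) -> dict:
    """Raise if the service reports a captcha/block/challenge instead of a clean pass."""
    if page.get("challenge") != "pass":
        raise RuntimeError(f"blocked: challenge={page.get('challenge')!r}")
    return page

def scrapling_fetch(target: str, nocache: bool = False) -> dict:
    url = build_fetch_url(os.environ["SCRAPLING_URL"], target, nocache)
    with urllib.request.urlopen(url) as r:
        return ensure_pass(json.load(r))
```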
Pre-built search + Python-side parsing. Returns structured JSON — no browser snapshot/type/click needed. Results are parsed server-side via Scrapling's Adaptor engine (CSS selectors, find_similar for self-healing).
Google Flights:
POST {SCRAPLING_URL}/search
{"site":"google-flights","origin":"LAX","destination":"NRT","departure":"2026-03-15","return":"2026-03-22"}
Returns: {"results": [{"airline":"...","price":"...","duration":"...","stops":"..."}], "count": N}
Kayak:
POST {SCRAPLING_URL}/search
{"site":"kayak","origin":"LAX","destination":"NRT","departure":"2026-03-15","return":"2026-03-22"}
Booking.com hotels:
POST {SCRAPLING_URL}/search
{"site":"booking","destination":"Tokyo","checkin":"2026-03-15","checkout":"2026-03-22","guests":2}
Returns: {"results": [{"name":"...","price":"...","rating":"...","location":"..."}], "count": N}
Deal sites:
POST {SCRAPLING_URL}/search
{"site":"secretflying"}
POST {SCRAPLING_URL}/search
{"site":"theflightdeal"}
Returns: {"results": [{"title":"...","url":"..."}], "count": N}
Check challenge field — if not "pass", results may be incomplete (consent wall, bot block).
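The /search calls above all share one shape: POST a JSON body with a site key plus site-specific parameters, read back results and count. A minimal client sketch, assuming only the endpoint and payloads shown in this section (the function names are illustrative):

```python
import json
import os
import urllib.request

def search_payload(site: str, **params) -> bytes:
    """JSON body for POST {SCRAPLING_URL}/search, e.g. site='kayak', origin='LAX'."""
    return json.dumps({"site": site, **params}).encode()

def scrapling_search(site: str, **params) -> list[dict]:
    req = urllib.request.Request(
        os.environ["SCRAPLING_URL"] + "/search",
        data=search_payload(site, **params),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as r:
        body = json.load(r)
    return body.get("results", [])   # [] when the site returned nothing usable
```

For comparison shopping, issue several of these calls concurrently (e.g. google-flights and kayak with the same route) and merge the results.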
For flows needing form fill, click, screenshot (check-in, login, registration):
POST {SCRAPLING_URL}/interact
{
  "url": "https://www.southwest.com/air/check-in/",
  "steps": [
    {"action": "consent"},
    {"action": "fill", "selector": "#confirmationNumber", "value": "ABC123"},
    {"action": "fill", "selector": "#firstName", "value": "John"},
    {"action": "fill", "selector": "#lastName", "value": "Doe"},
    {"action": "click", "selector": "button#form-mixin--submit-button"},
    {"action": "wait", "ms": 5000},
    {"action": "screenshot"},
    {"action": "extract", "css": "h1::text"}
  ]
}
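Building the steps array programmatically keeps passenger details out of hand-written JSON. A sketch mirroring the Southwest example above; `checkin_steps` and `run_interact` are illustrative names, and the selectors are taken verbatim from that example:

```python
import base64
import json
import os
import urllib.request

def checkin_steps(confirmation: str, first: str, last: str) -> list[dict]:
    """Steps for POST {SCRAPLING_URL}/interact, per the check-in example above."""
    return [
        {"action": "consent"},
        {"action": "fill", "selector": "#confirmationNumber", "value": confirmation},
        {"action": "fill", "selector": "#firstName", "value": first},
        {"action": "fill", "selector": "#lastName", "value": last},
        {"action": "click", "selector": "button#form-mixin--submit-button"},
        {"action": "wait", "ms": 5000},
        {"action": "screenshot"},
        {"action": "extract", "css": "h1::text"},
    ]

def run_interact(url: str, steps: list[dict]) -> dict:
    req = urllib.request.Request(
        os.environ["SCRAPLING_URL"] + "/interact",
        data=json.dumps({"url": url, "steps": steps}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as r:
        out = json.load(r)
    # Screenshots come back base64-encoded in the screenshots array.
    for i, shot in enumerate(out.get("screenshots", [])):
        with open(f"shot-{i}.png", "wb") as f:
            f.write(base64.b64decode(shot))
    return out
```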
Available actions:
- consent — auto-dismiss cookie consent walls
- fill — fill input by CSS selector (instant, like paste)
- type — type with per-key delay (more human-like, use for sensitive fields)
- click — click element by CSS selector
- wait — wait N milliseconds
- wait_for — wait for selector to appear (with timeout)
- screenshot — capture current page (returned as base64 in screenshots array)
- extract — parse page with CSS selector (results in extracted array)
- select — select dropdown option

web_fetch {SCRAPLING_URL}/fetch?url=https://www.delta.com&json=1&screenshot=1
web_fetch {SCRAPLING_URL}/fetch?url=https://www.secretflying.com&json=1&extract=css&selector=article
- /search in parallel for comparison data

For any task where we have an API:
Scrapling service handles consent dismissal automatically via page_action.
For native browser, patterns to try in order:
If you see any of these, you're being blocked:
Response:
Server is in Helsinki, Finland (EU). This means: