Install

openclaw skills install god-of-all-browsers

Note: ClawHub Security found sensitive or high-impact capabilities in this skill. Review the scan results before using.

God of all Browsers

A 100x smarter browser automation CLI that mimics human behavior using a native, stateful Chromium instance. This stateful, multi-tab Puppeteer skill is designed to help AI agents automate heavily protected websites the exact same way a human does: it supports multi-tab management, bypasses bot detection, auto-closes popups, and preserves cookies permanently.
It solves three critical AI problems:

- It maps the DOM: snapshot assigns a [tag] ID to every visible button/input and takes a screenshot, so the AI just says "Click tag [15]."
- It evades bot detection: headless: false, custom user agents, removed webdriver footprints, and canvas spoofing.
- It preserves state: a persistent chrome_profile directory means login sessions and cookies survive across runs.

Important Setup: Ensure the Chromium path is correct (C:\Program Files\Google\Chrome\Application\chrome.exe on Windows or /usr/bin/chromium on Linux) and that puppeteer-core is installed.
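As an illustration, the Chromium path and stealth settings described above map onto Puppeteer launch options along these lines (this is a sketch of assumed settings, not the skill's actual source; the exact flags are an assumption):

```javascript
// Hypothetical Puppeteer launch options matching the described setup.
// executablePath, userDataDir, and args are illustrative assumptions.
const launchOptions = {
  executablePath: "/usr/bin/chromium",        // or the Windows Chrome path
  headless: false,                            // visible window, as described
  userDataDir: "./chrome_profile",            // persistent profile = saved logins
  args: ["--disable-blink-features=AutomationControlled"], // hide webdriver footprint
};
```

A real launch would pass this object to puppeteer-core's launch call; the point here is only which knobs the stealth features correspond to.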
Launch the browser in the background. It will use a persistent chrome_profile directory so you NEVER lose login sessions.
# Standard mode (Recommended for debugging)
node browser.js start
# Headless mode (Faster, silent background)
# Note: Automatically enabled if running in Termux.
node browser.js start --headless
This command is your "eyes". Run it before any interaction to get the active window's current state and a list of clickable [tag] IDs.
# If navigating somewhere new:
node browser.js snapshot --url "https://www.google.com"
# If already on the page (refresh DOM):
node browser.js snapshot
Wait for this command to output the JSON array of tags. It will also automatically click away annoying Chatbot/Notification popups.
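To make the tag workflow concrete, here is a small sketch of how an agent might consume that tag array. The object shape ({ tag, role, text }) is an assumed example for illustration, not the documented output format of this skill:

```javascript
// Hypothetical snapshot tag list (shape is an assumption).
const tags = [
  { tag: "[5]", role: "input", text: "Search" },
  { tag: "[15]", role: "button", text: "Sign in" },
];

// Find the [tag] ID whose visible text matches a label.
function tagFor(list, label) {
  const hit = list.find((t) => t.text.toLowerCase().includes(label.toLowerCase()));
  return hit ? hit.tag : null;
}

console.log(tagFor(tags, "sign in")); // prints "[15]"
```

The agent would then issue `node browser.js click --tag "[15]"` with the ID it resolved.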
Use the tags captured during the snapshot.
# Click a button or link (e.g. tag [24])
node browser.js click --tag "[24]"
# Type into an input box (e.g. tag [5])
node browser.js type --tag "[5]" --text "MERN Stack Developer"
# Press a specific keyboard key (Default: Enter)
node browser.js press --key "Enter"
Extract text content from any element using tags or CSS selectors.
# Read visible text from a specific tag
node browser.js read --tag "[12]"
# Read content of a specific CSS selector (e.g. the main article)
node browser.js read --selector "article.main-content"
# Deep-expand hidden content (clicks Read More/Show All buttons automatically)
node browser.js expand
Many sites open clicked links in a new tab! If your click command opens a new tab, the CLI will automatically say:
⚠️ A NEW TAB WAS OPENED!! Automatically switched context to Tab [1].
You can manually manage tabs using:
# List all currently open tabs
node browser.js check-tabs
# Switch to a specific tab index (e.g. going back to the search page: tab 0)
node browser.js switch-tab --index 0
# Check the current URL you are viewing:
node browser.js check-url
Use this to filter elements by keywords instead of reading a massive snapshot. It can search live on the current page or in a previously saved JSON file.
# Search live for "Apply" or "Success" buttons
node browser.js find --query "apply,success"
# Search within a specific saved snapshot file (e.g., to verify output)
node browser.js find --file "snapshot.json" --query "applied,successfully"
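The filtering that find performs can be approximated as the sketch below — split the comma-separated query into keywords and keep elements whose text matches any of them. The snapshot element shape (an array of { tag, text } objects) is an assumption for illustration:

```javascript
// Hypothetical approximation of find's keyword filter.
function findByKeywords(elements, query) {
  const words = query
    .split(",")
    .map((w) => w.trim().toLowerCase())
    .filter(Boolean);
  return elements.filter((el) =>
    words.some((w) => (el.text || "").toLowerCase().includes(w))
  );
}

const elements = [
  { tag: "[3]", text: "Apply now" },
  { tag: "[9]", text: "Cancel" },
];
console.log(findByKeywords(elements, "apply,success")); // prints [ { tag: '[3]', text: 'Apply now' } ]
```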
Manually reload the current tab. Useful for status updates.
node browser.js refresh
Extract hidden page data like Title, Description, and Social Media tags.
node browser.js scrap-meta
Execute custom JavaScript logic directly in the browser context. Note: For security, the --force flag is required. Supports both inline code and script files.
# Execute inline code (Requires --force)
node browser.js eval --code "return { links: document.querySelectorAll('a').length }" --force
# Execute from a file (Requires --force)
node browser.js eval --file "custom_script.js" --force
Get the top 5 organic search results (Titles, Links, Snippets) in a single command. Extremely fast and agent-friendly.
node browser.js google --query "Mathanraj Murugesan"
Manage your login state and keep track of automation failures for self-correction.
# Save current cookies to session.json (persists across runs)
node browser.js save-session
# Check if the page requires login or if the user is already logged in
node browser.js auth-status
# Log a failure and a lesson learned for AI self-correction
node browser.js log-learning --failed "Selector [12] was hidden" --fixed "Used [expand] first" --lessons "Always try expanding content before reading"
Clean up resources when the task is entirely finished.
node browser.js stop
A quick reference flow:

1. start.
2. snapshot --url "[TARGET]".
3. auth-status if the page is restricted. Use save-session after manual/automated login.
4. click or type on the specific [tag].
5. If a new tab opens, snapshot will read from that tab.
6. Run snapshot again WITHOUT a URL to read the new page/modal that loaded.
7. Manage tabs with check-tabs and switch-tab --index 0.
8. Use log-learning to record the fix for future runs.
9. For custom extraction, use eval with the --force flag.
10. Finish with stop.

When you need to get actual data (not just see the page), use the eval command with these patterns:
Google Search Results:
node browser.js eval --force --code "return Array.from(document.querySelectorAll('div.g')).slice(0,5).map(g => ({ title: g.querySelector('h3')?.innerText, link: g.querySelector('a')?.href }))"
LinkedIn Profile (Basic):
node browser.js eval --force --code "return { name: document.querySelector('.text-heading-xlarge')?.innerText, title: document.querySelector('.text-body-medium')?.innerText }"
General Link Scraper:
node browser.js eval --force --code "return Array.from(document.querySelectorAll('a')).map(a => ({ text: a.innerText, url: a.href })).filter(a => a.url.startsWith('http'))"
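Scraper output like the link list above usually needs post-processing. Here is a small sketch (a hypothetical helper, not part of the skill) that dedupes by URL and drops links with empty text:

```javascript
// Sketch: clean up link-scraper output — drop empty-text links,
// keep only the first occurrence of each URL.
function dedupeLinks(links) {
  const seen = new Set();
  return links.filter((l) => {
    if (!(l.text || "").trim() || seen.has(l.url)) return false;
    seen.add(l.url);
    return true;
  });
}

const raw = [
  { text: "Home", url: "https://example.com/" },
  { text: "", url: "https://example.com/hidden" },
  { text: "Home again", url: "https://example.com/" },
];
console.log(dedupeLinks(raw).length); // prints 1
```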
Follow this professional flow for complex, multi-stage automation tasks:
1. start to launch the persistent browser instance.
2. snapshot --url "[TARGET]" to land on the page.
3. auth-status to check if a login is required; save-session to persist the state afterwards.
4. expand before deep scanning. This removes popups and reveals hidden content that might be missing from the DOM.
5. snapshot (without URL) to get the latest [tag] list.
6. Interact with click, type, or press.
7. check-url to see if the page changed.
8. If you see ⚠️ A NEW TAB WAS OPENED, run check-tabs. If the new content is on tab [1], run switch-tab --index 1.
9. read --tag "[#]" for simple text; eval for complex data structures (arrays of objects, etc.).
10. On failure, log-learning --failed "..." --fixed "..." to document the solution for the AI's internal memory.
11. stop only when the entire job (across all domains) is finished.