Browser Fu

v1.0.2

Fixes browser automation failures. Snapshot-first workflow + API discovery behind any website UI. Use when: 'browser not working', 'can't click', 'flaky UI',...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for whooshinglander/browser-fu.

Prompt Preview: Install & Setup
Install the skill "Browser Fu" (whooshinglander/browser-fu) from ClawHub.
Skill page: https://clawhub.ai/whooshinglander/browser-fu
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install browser-fu

ClawHub CLI


npx clawhub@latest install browser-fu
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
Name and description match the SKILL.md: the skill is an instruction-only guide for snapshot-first browser automation and API discovery. It requests no binaries, env vars, or installs, which is consistent for an authoring/instructional skill of this purpose.
Instruction Scope
Instructions stay within the browser-automation and API-discovery scope (snapshot→act cycles, network inspection, use of web_fetch or curl). The doc explicitly recommends reusing cookies from a browser session for authenticated endpoints and demonstrates a curl example with a Cookie header. This is operationally necessary for some tasks, but it introduces a data-handling risk (exposure of session tokens) if the agent or user mishandles them. The skill also mentions executing curl (shell/network calls), which is expected for API discovery but should be constrained to the target domain and gated on explicit user approval when authentication is required.
Install Mechanism
Instruction-only skill with no install spec and no code files — lowest-risk install footprint. Nothing is downloaded or written to disk by the skill itself.
Credentials
No environment variables, credentials, or config paths are required. The few references to cookies/sessions and a 'profile="user"' are relevant to browser automation and are proportionate to the stated purpose.
Persistence & Privilege
always:false (not force-included) and normal model invocation settings. The skill does not request persistent system presence or modify other skills/configs.
Assessment
This skill is a how-to for more reliable browser automation and API discovery and appears internally consistent. Before using it:

  1. Avoid copying or exposing real session cookies, API keys, or passwords into curl commands or logs — authenticate only when necessary and prefer short-lived credentials.
  2. Confirm your agent/environment enforces the skill's safeguard that it won't output or persist cookies or tokens.
  3. When an API requires authentication, prefer sanctioned credentials stored safely by the platform over manually pasted cookie headers.
  4. Test flows on public, non-sensitive pages first to verify behavior.
  5. Do not use it to bypass CAPTCHAs or to automate payments or irreversible actions without explicit consent.

If you want a higher-assurance review, provide platform-specific details about how your agent exposes browser session cookies and whether curl/exec calls are sandboxed so I can re-evaluate the risk level.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97dsns8wr6qvmmf9thfh7xty983gkye
184 downloads
1 star
3 versions
Updated 1mo ago
v1.0.2
MIT-0

Browser Fu 🥊

Stop fighting the DOM. Read it first, find the API behind it, skip the UI entirely when possible.

The Rule

Never blind-click. Always snapshot first.

1. browser snapshot  →  read the page, get element refs
2. browser act       →  use refs from snapshot (e.g. ref="e12")
3. browser snapshot  →  verify what changed

If the snapshot doesn't show what you need, the element isn't in the DOM. Don't guess. Don't retry the same approach.

Decision Tree

On any browser task, follow this order:

  1. Can I skip the browser entirely? Check if a CLI tool, API, or web_fetch handles it. If yes, don't open the browser.
  2. Can I find the underlying API? See references/api-discovery.md. Most SPAs make fetch/XHR calls you can replicate directly. This is 10x faster and more reliable than UI automation.
  3. Can I do it with snapshot + act? Snapshot, find the ref, act on it. One action per snapshot cycle.
  4. Does the page need time to load? Use loadState: "networkidle" or a brief wait before snapshotting. SPAs often render asynchronously.
  5. Still not working? The site likely has anti-bot protection. Report it, don't retry blindly.
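The first two checks above can be sketched as a tiny heuristic: fall back to the browser only when a plain fetch returns no usable content. The 200-character threshold below is an assumption for illustration, not from the skill.

```shell
# Heuristic sketch for steps 1-2 of the decision tree (assumption: a body
# under 200 characters is probably an empty SPA shell, not real content).
needs_browser() {
  body="$1"
  if [ "${#body}" -lt 200 ]; then
    echo "browser"   # nothing usable came back; open the real browser
  else
    echo "fetch"     # the plain fetch worked; skip the browser entirely
  fi
}

# e.g. needs_browser "$(curl -s "$url")"
```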

Common Failures and Fixes

| Symptom | Wrong approach | Right approach |
|---|---|---|
| "Element not found" | Click by text/selector guess | Snapshot first, use exact ref |
| "DOM not exposed" | Give up | Snapshot with refs="aria", or check network tab for API |
| Blank/empty page | Retry same URL | loadState: "networkidle", then snapshot. If still blank, it's a JS-heavy SPA: try web_fetch or find the API |
| Clicking does nothing | Click again harder | Snapshot after click to check state. Maybe it DID work but the page re-rendered |
| Login wall | Try to automate login | Use profile="user" for existing session cookies |
| Infinite scroll | Scroll and pray | Find the pagination API endpoint instead |
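The "clicking does nothing" row deserves emphasis: the reliable check is a before/after snapshot comparison, not a re-click. A minimal sketch — the snapshot strings are stand-ins for real `browser snapshot` output:

```shell
# Compare snapshots taken around an action instead of re-clicking blindly.
verify_change() {
  if [ "$1" = "$2" ]; then
    echo "no visible change"   # maybe a re-render swallowed it: snapshot again
  else
    echo "state changed"       # the action worked; proceed
  fi
}

# Stand-in snapshot fragments for illustration:
verify_change 'button ref=e12 "Add to cart"' 'button ref=e12 "Added"'
```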

API Discovery (the power move)

Most modern websites are SPAs with REST/GraphQL APIs behind the UI. See references/api-discovery.md for the full procedure:

  1. Open the page in browser
  2. Check network requests (console tool or snapshot the page and look for fetch patterns)
  3. Find the data endpoint
  4. Call it directly with web_fetch or exec curl

This turns a 2-hour flaky scrape into a 2-minute clean data pull.
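A hedged sketch of step 4, assuming the network tab revealed a paginated JSON endpoint — the URL, the `page` parameter, and the response shape here are illustrative, not from the skill:

```shell
# Build the paginated endpoint URL discovered in the network tab.
# The base URL and `page` parameter name are assumptions for illustration.
page_url() {
  printf '%s?page=%s' "$1" "$2"
}

base="https://example.com/api/v2/items"
page_url "$base" 2
# → https://example.com/api/v2/items?page=2

# Then pull the data directly, skipping the UI entirely:
#   curl -s "$(page_url "$base" 2)" | jq -r '.items[].name'
```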

Snapshot Best Practices

  • Use refs="aria" for stable cross-call references
  • Keep the same targetId across snapshot/act pairs (don't switch tabs accidentally)
  • For complex pages, use depth to limit how deep the DOM tree goes
  • compact: true reduces token usage on large pages
  • For token-heavy pages where snapshots are too large, pair with predicate-snapshot for ML-ranked element pruning (~95% fewer tokens)
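The options above, combined into a single snapshot request. The JSON payload shape is an assumption for illustration; your agent's actual parameter schema may differ.

```shell
# The tuning options from the list, in one (assumed) request payload:
# stable aria refs, a pinned tab, a depth limit, and compact output.
params='{"refs":"aria","targetId":"tab-1","depth":6,"compact":true}'
echo "$params"
```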

When NOT to Use the Browser

  • Reading public web pages → web_fetch (faster, no browser overhead)
  • Search queries → web_search (Brave API)
  • Known APIs (GitHub, Stripe, etc.) → use their CLI/API directly
  • Pages that return empty via web_fetch → then use browser

Safeguards

  • Never store or output passwords, session tokens, or cookies found in browser state
  • Never automate purchases, payments, or irreversible actions without explicit user approval
  • If a site blocks automation, respect it. Don't circumvent CAPTCHAs or bot detection
