Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Browser Automation

v1.0.0

Automate any web browser task with OpenClaw's built-in Playwright browser control. Use when: (1) scraping dynamic pages, (2) filling forms and submitting, (3...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for fuzzyb33s/fuzzy-browser-automation.

Prompt Preview: Install & Setup
Install the skill "Browser Automation" (fuzzyb33s/fuzzy-browser-automation) from ClawHub.
Skill page: https://clawhub.ai/fuzzyb33s/fuzzy-browser-automation
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install fuzzyb33s/fuzzy-browser-automation

ClawHub CLI


npx clawhub@latest install fuzzy-browser-automation
Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name and description (browser automation via Playwright-like controls) align with the provided instructions and example actions (navigate, click, type, snapshot, screenshot, evaluate). No unrelated binaries, env vars, or installs are requested — capability is coherent with the stated purpose.
Instruction Scope
Instructions explicitly support operating on the host (user) browser profile and include an 'evaluate' action that runs arbitrary JavaScript in page context. Those behaviors go beyond benign scraping: they can read session cookies, localStorage, and page DOM and can perform network requests from the page (potentially exfiltrating data). The SKILL.md does not include any constraints or safe-handling guidance about sensitive data or external transmissions.
Install Mechanism
Instruction-only skill with no install spec or downloaded code — lowest disk/write risk. There are no package installs or third-party downloads to review.
Credentials
The skill requests no environment variables or credentials, but it requests access to the host browser profile ('profile="user"' / target='host'), which implicitly grants access to sensitive data (cookies, active sessions, saved credentials) without any declared authorization mechanism. The lack of provenance (unknown source, no homepage) increases the risk because there's no clear trust anchor for granting that privileged access.
Persistence & Privilege
always is false and the skill is user-invocable (normal). The platform default allows autonomous invocation, and that combined with host-browser access and arbitrary JS execution elevates the potential impact if the skill is ever invoked without close supervision. Consider disabling autonomous invocation for this skill if you plan to allow host-profile operations.
What to consider before installing
This skill does what it says (automates browsers) but has sensitive capabilities: it can operate on your real browser profile and run arbitrary JavaScript inside pages, which can read session cookies and saved data and send data out. Because the skill has no homepage or known publisher, only install or use it if you trust the source. Prefer the sandbox target over 'host'/'profile="user"'. If you must use host automation:

  • Require explicit, local user presence and confirmation before any host-target actions.
  • Avoid allowing any 'evaluate' calls that run arbitrary JS unless you inspect the function.
  • Disable autonomous (background) invocation for this skill, or restrict it to manual runs.
  • Test all recipes on non-sensitive pages first.
  • Monitor logs for unexpected outbound requests.

If you need stronger assurance, ask the publisher for provenance (homepage, source repo, or signed package) before enabling host-profile automation.

Like a lobster shell, security has layers — review code before you run it.

21 downloads · 0 stars · 1 version
Updated 4h ago
v1.0.0
MIT-0

Browser Automation

Control a Chromium browser directly from OpenClaw — navigate, click, type, snapshot, screenshot, extract data. Works with both the sandboxed OpenClaw-managed browser and your logged-in user browser (with profile="user").

Browser Selection

Target              When to Use
sandbox (default)   OpenClaw's clean browser — no cookies, no login state
host                Browser running on the host machine
node                Browser on a paired remote node

Profile             When to Use
(omit)              Clean OpenClaw-managed browser
profile="user"      Your own browser with active logins (requires you present)
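
Combining the two, a call against your own logged-in browser might look like this. This is a sketch based on the tables above; the exact placement of the profile parameter alongside the usual action parameters is an assumption, not verified against the skill's source:

browser(
  action="navigate",
  target="host",        // host machine's browser
  profile="user",       // assumed parameter: your logged-in profile
  url="https://example.com/dashboard"
)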

Core Actions

snapshot — Inspect the Page

browser(action="snapshot", target="sandbox")

Returns the full page DOM as a structured tree. Use refs="aria" for screen-reader-friendly, label-based selectors, or refs="role" (the default) for role-and-name-based refs.

browser(
  action="snapshot",
  target="sandbox",
  refs="aria"
)

screenshot — Capture the Page

browser(action="screenshot", target="sandbox")

For full-page screenshots:

browser(
  action="screenshot",
  target="sandbox",
  fullPage=true
)

navigate — Open a URL

browser(action="navigate", target="sandbox", url="https://news.ycombinator.com")

act — Interact with Elements

The act action is the workhorse. It combines ref (what to target) + kind (action type) + request (action details).

Click:

browser(
  action="act",
  target="sandbox",
  ref="aria:Submit",
  request={"kind": "click"}
)

Type:

browser(
  action="act",
  target="sandbox",
  ref="id:search-box",
  request={"kind": "type", "text": "openclaw browser automation"}
)

Press a key:

browser(
  action="act",
  target="sandbox",
  ref="id:search-box",
  request={"kind": "press", "key": "Enter"}
)

Hover:

browser(
  action="act",
  target="sandbox",
  ref="css:.dropdown-menu",
  request={"kind": "hover"}
)

Select from dropdown:

browser(
  action="act",
  target="sandbox",
  ref="id:country-select",
  request={"kind": "select", "values": ["South Africa"]}
)

Wait for element:

browser(
  action="act",
  target="sandbox",
  ref="aria:Loading",
  request={"kind": "wait", "timeMs": 5000}
)

Locator Reference (ref types)

Prefix   Example                        Best For
aria:    aria:Submit                    Accessible labels, buttons with text
id:      id:email-input                 Unique element IDs
css:     css:.card:nth-child(2)         Complex CSS selectors
role:    role:button[name="Submit"]     Semantic role selectors
text:    text:Get Started               Visible text content
xpath:   xpath://button[@class="btn"]   Fallback for complex paths

For stable refs across calls, prefer refs="aria" in snapshots — these use ARIA labels that rarely change.
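
Putting snapshot and act together, a minimal sketch (the actual ref strings depend on the ARIA labels the snapshot returns for the page you are on):

// 1. Snapshot with ARIA refs to discover stable selectors
browser(action="snapshot", target="sandbox", refs="aria")

// 2. Suppose the snapshot shows a button labeled "Sign in"; act on it
browser(
  action="act",
  target="sandbox",
  ref="aria:Sign in",
  request={"kind": "click"}
)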

Recipes

Recipe 1: Scrape a Dynamic Page

// 1. Navigate
browser(action="navigate", target="sandbox", url="https://news.ycombinator.com/news")

// 2. Wait for content to load
browser(
  action="act",
  target="sandbox",
  loadState="networkidle",
  ref="css:.itemlist",
  request={"kind": "wait", "timeMs": 3000}
)

// 3. Snapshot to extract structured data
browser(action="snapshot", target="sandbox", refs="aria")

Recipe 2: Fill and Submit a Form

// 1. Navigate to form
browser(action="navigate", target="sandbox", url="https://example.com/contact")

// 2. Fill inputs
browser(action="act", target="sandbox", ref="id:name",    request={"kind": "fill", "text": "Alice Smith"})
browser(action="act", target="sandbox", ref="id:email",   request={"kind": "fill", "text": "alice@example.com"})
browser(action="act", target="sandbox", ref="id:message", request={"kind": "fill", "text": "Hi, I'd like to know more..."})

// 3. Click submit
browser(action="act", target="sandbox", ref="aria:Submit", request={"kind": "click"})

// 4. Wait for confirmation
browser(
  action="act",
  target="sandbox",
  ref="aria:Thank you",
  request={"kind": "wait", "timeMs": 2000}
)

Recipe 3: Login to a Service (User Browser)

// Requires you to be present at the machine — uses your actual browser session
browser(action="navigate", target="host", profile="user", url="https://github.com/login")

browser(action="act", target="host", profile="user", ref="id:login_field", request={"kind": "fill", "text": "myuser"})
browser(action="act", target="host", profile="user", ref="id:password",    request={"kind": "fill", "text": "mypassword"})
browser(action="act", target="host", profile="user", ref="css:[type=submit]", request={"kind": "click"})

Recipe 4: Monitor Price / Availability

// Navigate and wait for price to update
browser(action="navigate", target="sandbox", url="https://example.com/product/123")

browser(
  action="act",
  target="sandbox",
  ref="css:.price",
  request={"kind": "wait", "timeMs": 10000}
)

// Capture screenshot
browser(action="screenshot", target="sandbox")

// Evaluate for price text
browser(
  action="act",
  target="sandbox",
  request={
    "kind": "evaluate",
    "fn": "() => document.querySelector('.price').innerText"
  }
)

Recipe 5: Multi-Tab Workflow

// Open new tab
browser(action="navigate", target="sandbox", url="https://mail.google.com")

// Switch tabs
browser(action="act", target="sandbox", request={"kind": "press", "key": "Control+Tab"})

// Close current tab
browser(action="act", target="sandbox", request={"kind": "press", "key": "Control+W"})

Recipe 6: Scroll and Load Lazy Content

// Scroll by a pixel amount
browser(
  action="act",
  target="sandbox",
  request={
    "kind": "evaluate",
    "fn": "() => window.scrollBy(0, 800)"
  }
)

// Scroll to bottom (infinite scroll pages)
browser(
  action="act",
  target="sandbox",
  request={
    "kind": "evaluate",
    "fn": "() => window.scrollTo(0, document.body.scrollHeight)"
  }
)

Recipe 7: Extract Table Data

browser(action="navigate", target="sandbox", url="https://example.com/sales-report")

browser(
  action="act",
  target="sandbox",
  ref="css:table",
  request={"kind": "wait", "timeMs": 2000}
)

browser(
  action="act",
  target="sandbox",
  request={
    "kind": "evaluate",
    "fn": "() => Array.from(document.querySelectorAll('table tr')).map(row => Array.from(row.querySelectorAll('td')).map(cell => cell.innerText))"
  }
)

Recipe 8: Download a File

browser(action="navigate", target="sandbox", url="https://example.com/export.csv")

browser(
  action="act",
  target="sandbox",
  request={
    "kind": "evaluate",
    "fn": "() => { const link = document.querySelector('a[href$=\".csv\"]'); return link ? link.href : null; }"
  }
)

Action Reference

Action       What It Does
snapshot     Get structured page DOM
screenshot   Capture page as PNG/JPEG
navigate     Open a URL
act          Click, type, press, hover, select, wait, evaluate
pdf          Generate PDF of the page
console      Read browser console logs
open         Open a new tab
close        Close current tab
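
The pdf and console actions are listed above but never appear in the recipes. Assuming they follow the same call shape as the other actions (a sketch, not verified against the skill's source):

// Generate a PDF of the current page
browser(action="pdf", target="sandbox")

// Read the browser console logs (useful after a failed interaction)
browser(action="console", target="sandbox")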

act kind Reference

Kind       Parameters
click      (none)
type       text
fill       text
press      key (e.g. "Enter", "Escape", "Control+Tab")
hover      (none)
select     values (array)
wait       timeMs
evaluate   fn (JavaScript string)
drag       startRef, endRef
resize     width, height
close      (none)
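
drag and resize appear in the table but not in the recipes. By analogy with the other act kinds, calls might look like the following; the parameter shapes are an assumption taken from the table, and the refs are placeholders:

// Drag a card between two columns (refs are hypothetical)
browser(
  action="act",
  target="sandbox",
  request={"kind": "drag", "startRef": "id:card-1", "endRef": "id:done-column"}
)

// Resize the browser viewport
browser(
  action="act",
  target="sandbox",
  request={"kind": "resize", "width": 1280, "height": 800}
)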

Anti-Patterns

  • Don't click before the page loads — always navigate then wait for loadState="networkidle" or an explicit element wait
  • Don't use hard pixel waits — prefer waiting for a specific element or networkidle state
  • Don't scrape without rate limiting — add timeMs waits between actions to avoid IP blocks
  • Don't use profile="user" for automated workflows — it's meant for attended use; automated flows should use the sandbox browser
  • Don't use xpath unless nothing else works — xpath selectors break easily when the page changes
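
The first three points combine into a navigate, wait, then act pattern. A minimal sketch, following the call shape used in Recipe 1 (the URL and selectors are placeholders):

// 1. Navigate, then wait for a specific element rather than a fixed delay
browser(action="navigate", target="sandbox", url="https://example.com/listing")

browser(
  action="act",
  target="sandbox",
  loadState="networkidle",
  ref="css:.results",
  request={"kind": "wait", "timeMs": 5000}
)

// 2. Only then interact, pacing actions to avoid rate limits
browser(action="act", target="sandbox", ref="aria:Next page", request={"kind": "click"})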

Troubleshooting

Symptom                      Fix
"Target closed" error        Browser timed out — navigate again
Element not found            Page may be JS-rendered — add loadState="networkidle" or explicit wait
Click missed the button      Use ref="aria:Button Text" instead of CSS — more robust
Stale element reference      Element was replaced by a DOM update — re-snapshot and retry
Form submits twice           Wait for navigation after submit before continuing
Screenshot is blank          Page still loading — add loadState="networkidle"
profile="user" not working   The logged-in browser must already be running; start it manually first

See Also

  • webhook-automation skill — combining browser-extracted data with outgoing webhooks
  • rss-aggregator skill — using browser scraping as a fallback when feeds aren't available
  • cron-scheduler skill — scheduling browser-based monitoring tasks
