Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Web Pilot

Search the web and read page contents without API keys. Use when you need to search via DuckDuckGo/Brave/Google (multi-page), extract readable text from URLs...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
1 · 5.4k · 76 current installs · 80 all-time installs
by Liran Udi (@LiranUdi)
Security Scan
VirusTotal
Suspicious
OpenClaw
Benign (medium confidence)
Purpose & Capability
Name/description match the actual code and scripts: search scrapers, Playwright-based page reader, persistent browser session, and file downloader. No unrelated credentials, binaries, or config paths are requested.
Instruction Scope
SKILL.md and the scripts line up: they open arbitrary URLs, run JS in page contexts (EXTRACT_JS, COOKIE_DISMISS_JS), auto-dismiss cookie banners, click/fill/execute JS, and download files. These behaviors are expected for a browsing/automation tool, but they give the agent capability to interact with pages (including clicking consent buttons and executing arbitrary page JS) and to fetch and save arbitrary remote content.
Install Mechanism
No install spec in registry (instruction-only), but README and SKILL.md require pip packages and `playwright install chromium`, which will download and install Chromium binaries. This is expected for Playwright-based tools but does perform a large network download during setup.
Credentials
No environment variables, credentials, or external tokens are requested. The tool runs locally and its resource requests (Playwright, requests, optional PDF libs) are proportionate to the stated functionality.
Persistence & Privilege
The persistent session creates a Unix domain socket at /tmp/web-pilot-browser.sock, a PID file, and writes /tmp/web-pilot-initial.json and downloads to /tmp. A local Unix socket without explicit permission controls can be connected to by other local users/processes on the same host, which could allow command injection (navigate/extract/screenshot/eval) via that socket. This is expected for a long-running local helper but is an operational security consideration.
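To illustrate the concern, any local process can connect to an unauthenticated Unix socket in a world-readable directory like /tmp. The sketch below is hypothetical: it stands up a dummy echo server on a socket (standing in for the helper; the real command protocol is not documented here, so the payload is invented) and connects to it from a "second process" with no credential check. Note that `mkdtemp()` creates a 0700 directory, which is itself the mitigation; a socket placed directly in /tmp has no such barrier.

```python
import os
import socket
import tempfile
import threading

# Dummy stand-in for the helper's socket (the real one is
# /tmp/web-pilot-browser.sock); the command format below is hypothetical.
sock_path = os.path.join(tempfile.mkdtemp(), "web-pilot-browser.sock")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)
srv.listen(1)

def serve_once():
    # The "helper": accepts one connection and obeys whatever arrives,
    # with no authentication check at all.
    conn, _ = srv.accept()
    conn.sendall(b"ok: " + conn.recv(1024))
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# "Another local process": connects and issues a command unchallenged.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(sock_path)
cli.sendall(b"navigate https://attacker.example")
reply = cli.recv(1024).decode()
print(reply)  # the helper obeyed a stranger
cli.close()
t.join()
srv.close()
```

The same connect call would succeed for any user who can traverse the socket's parent directory, which is why the review below suggests a dedicated account or container.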
Assessment
This skill appears to be what it claims: a local Playwright-based web search, reader, and automation tool. Before installing:

1. Review the code if you will run it on a multi-user machine. The persistent server opens a Unix socket in /tmp and writes PID and temp files there, which other local users could access; consider running it under a dedicated user account or in a container.
2. Be aware it auto-clicks cookie-consent buttons (a privacy implication) and can execute arbitrary JS in page contexts via the session's eval command; only allow trusted agents to invoke it.
3. Downloads and PDF extraction write files under /tmp (or the configured output directory); verify destination and cleanup policies.
4. The install step runs `playwright install chromium`, which downloads browser binaries; validate your network policy for such downloads.

If you need stronger isolation, run the skill in a sandboxed environment (container/VM) or restrict access to the Unix socket.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk979cpbbrses16q5bj6s6t96zd81bqwp


SKILL.md

Web Pilot

Four scripts, zero API keys. All output is JSON by default.

Dependencies: requests, beautifulsoup4, playwright (with Chromium). Optional: pdfplumber or PyPDF2 for PDF text extraction.

Install: pip install requests beautifulsoup4 playwright && playwright install chromium

1. Search the Web

python3 scripts/google_search.py "query" --pages N --engine ENGINE
  • --engine: duckduckgo (default), brave, or google
  • Returns [{title, url, snippet}, ...]
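Because results arrive as JSON on stdout, they can be post-processed with any JSON-aware tool. A minimal sketch, assuming a hypothetical sample result list in the documented [{title, url, snippet}, ...] shape (in practice this string would be the stdout of google_search.py), that keeps only hits from one domain:

```python
import json
from urllib.parse import urlparse

# Hypothetical sample output; real runs produce this shape on stdout.
raw = json.dumps([
    {"title": "Docs", "url": "https://example.com/docs", "snippet": "intro"},
    {"title": "Blog", "url": "https://other.test/post", "snippet": "misc"},
])

results = json.loads(raw)
# Filter by exact hostname rather than substring matching, so
# "example.com.evil.test" does not slip through.
hits = [r["url"] for r in results
        if urlparse(r["url"]).hostname == "example.com"]
print(hits)  # ['https://example.com/docs']
```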

2. Read a Page (one-shot)

python3 scripts/read_page.py "https://url" [--max-chars N] [--visible] [--format json|markdown|text] [--no-dismiss]
  • --format: json (default), markdown, or text
  • Auto-dismisses cookie consent banners (skip with --no-dismiss)

3. Persistent Browser Session

python3 scripts/browser_session.py open "https://url"              # Open + extract
python3 scripts/browser_session.py navigate "https://other"        # Go to new URL
python3 scripts/browser_session.py extract [--format FMT]          # Re-read page
python3 scripts/browser_session.py screenshot [path] [--full]      # Save screenshot
python3 scripts/browser_session.py click "Submit"                  # Click by text/selector
python3 scripts/browser_session.py search "keyword"                # Search text in page
python3 scripts/browser_session.py tab new "https://url"           # Open new tab
python3 scripts/browser_session.py tab list                        # List all tabs
python3 scripts/browser_session.py tab switch 1                    # Switch to tab index
python3 scripts/browser_session.py tab close [index]               # Close tab
python3 scripts/browser_session.py dismiss-cookies                 # Manually dismiss cookies
python3 scripts/browser_session.py close                           # Close browser
  • Cookie consent auto-dismissed on open/navigate
  • Multiple tabs supported — open, switch, close independently
  • Search returns matching lines with line numbers
  • Extract supports json/markdown/text output

4. Download Files

python3 scripts/download_file.py "https://example.com/doc.pdf" [--output DIR] [--filename NAME]
  • Auto-detects filename from URL/headers
  • PDFs: extracts text if pdfplumber/PyPDF2 installed
  • Returns {status, path, filename, size_bytes, content_type, extracted_text}
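The returned JSON can be sanity-checked before an agent acts on the downloaded file. A sketch assuming a hypothetical sample in the documented {status, path, filename, size_bytes, content_type, extracted_text} shape:

```python
import json

# Hypothetical sample of download_file.py's documented return shape;
# the path and sizes here are invented for illustration.
result = json.loads("""{
  "status": "ok", "path": "/tmp/doc.pdf", "filename": "doc.pdf",
  "size_bytes": 52431, "content_type": "application/pdf",
  "extracted_text": "First page text..."
}""")

# Confirm the download succeeded, the server sent the expected MIME
# type, and the body is non-empty before trusting the file.
ok = (result["status"] == "ok"
      and result["content_type"] == "application/pdf"
      and result["size_bytes"] > 0)
print("trusted" if ok else "rejected", result["path"])  # trusted /tmp/doc.pdf
```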

Files

6 total
