free_google_search_with_browser
v0.0.1
Search Google using scrapling and return structured results (title, link, snippet). Invoke when user asks to search Google or find information online. Your d...
Security Scan
OpenClaw
Benign
medium confidence
Purpose & Capability
The name/description (search Google using a browser-like fetcher) aligns with the included Python scripts and requirements. The code implements browser automation (StealthyFetcher) and returns title/link/snippet results; none of the declared or implicit requirements are out-of-scope for a search-by-browser skill.
Instruction Scope
SKILL.md and the scripts instruct running local Python scripts, installing dependencies, and launching a visible browser (headless=False). The runtime will open GUI browser windows and the verify script opens multiple searches for testing. The code configures a local storage file (scrapling_storage.db) and sets solve_cloudflare=True (attempts to handle/solve Cloudflare protections). These behaviors are coherent with a stealthy scraper but have operational and policy implications (visible browser windows, stored local state, possible CAPTCHAs or anti-bot circumvention).
Install Mechanism
No registry install spec is provided; SKILL.md directs using pip install -r requirements.txt and running 'playwright install'. This is a common, expected install approach for Playwright-based Python projects. No downloads from untrusted URLs or archive extraction are present in the registry metadata; the risk is standard package-install risk from PyPI.
Credentials
The skill declares no environment variables or external credentials and the code does not access secrets. It writes a local database file (scrapling_storage.db) for storage, which is proportionate to the scraper's operation. No unrelated credentials or config paths are requested.
Persistence & Privilege
always is false and the skill does not request elevated or persistent platform privileges. It does not modify other skills or global agent settings; its persistence is limited to creating a local storage file in the working directory.
Assessment
This skill appears to do what it says, but consider the following before installing:
- It launches visible browser windows (headless=False). Run it on a machine with a GUI or use a virtual display (xvfb) or VM if needed.
- It requires installing Python packages and Playwright, which will download browser binaries (run 'playwright install'). Review and trust the packages on PyPI before installing.
- The code writes a local file named scrapling_storage.db in the working directory; that may contain state or cookies—treat it as local data that could include browsing artifacts.
- The scraper enables solve_cloudflare/adaptive stealth settings to bypass protections. That may trigger CAPTCHAs, violate Google/website terms of service, or have legal/ethical implications. Avoid running this against services where you lack permission.
- The verification script will open multiple browser windows; run it interactively (not on shared servers) to inspect behavior.
- If you intend to run this on a server, prefer an isolated environment (VM or container), and audit the installed dependencies. If you need headless operation, you must modify the script and ensure a headless-capable environment.
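The local-state caveat above can be handled with a small cleanup step between runs. A minimal sketch: the file name scrapling_storage.db comes from the scan, but whether deleting it between runs is safe for scrapling is an assumption to verify.

```python
from pathlib import Path

# scrapling_storage.db holds local browser state (possibly cookies).
# Removing it between runs discards any stored browsing artifacts.
# Assumption: scrapling recreates the file on its next run.
def clear_local_state(workdir: str = ".") -> bool:
    db = Path(workdir) / "scrapling_storage.db"
    if db.exists():
        db.unlink()
        return True
    return False
```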
If you want more assurance, request a line-by-line code review of any third-party libraries (scrapling, browserforge, etc.) and confirm whether solve_cloudflare behavior is acceptable for your use case.
Google Search
This skill searches Google using a stealthy fetcher and returns structured results suitable for LLM consumption.
Usage
Run the Python script google_search.py with the query as an argument.
python google_search.py "<query>"
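The exact output format is not documented here. Assuming the script prints its results as a JSON array of objects with title, link, and snippet keys, a caller could consume them as sketched below; that shape is an assumption, so verify it against the actual script.

```python
import json

# Hypothetical sample of the script's structured output; the JSON-array
# shape is an assumption, not confirmed by the skill's documentation.
sample = '''[
  {"title": "Python Tutorial", "link": "https://docs.python.org/3/tutorial/",
   "snippet": "An informal introduction to Python."}
]'''

results = json.loads(sample)
for r in results:
    print(f"{r['title']}: {r['link']}")
```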
File Structure
- google_search.py: The main script. It uses scrapling to perform the Google search. It launches a browser instance to fetch results, ensuring high success rates by mimicking real user behavior.
- verify_search.py: A debugging script. It runs a predefined set of queries to verify that the search functionality works correctly.
- requirements.txt: Lists the Python dependencies required for the project.
Requirements
- Python 3
- scrapling package installed (with playwright and curl_cffi dependencies)
To install dependencies:
pip install -r requirements.txt
playwright install # Required for browser automation. If slow, consider downloading manually.
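After installing, a quick sanity check confirms the key packages resolve. A minimal sketch, assuming the import names match the package names in requirements.txt:

```python
import importlib.util

# Check that the main dependencies are importable after installation.
# Assumption: import names match the PyPI package names.
for pkg in ("scrapling", "playwright", "curl_cffi"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'installed' if found else 'MISSING'}")
```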
Notes & Troubleshooting
Browser Environment (Headless=False)
This skill is configured to run with headless=False (see google_search.py). This means:
- GUI Required: The environment where this code runs must support a graphical user interface (GUI). It will launch a visible browser window.
- No Headless Servers: It will likely fail on headless servers (such as standard CI/CD runners or SSH-only servers) unless X11 forwarding or a virtual display (like xvfb) is configured.
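Since a visible browser needs a display, it can help to check for one before launching. A minimal sketch; the xvfb-run fallback command is a common convention, not something the skill itself provides:

```python
import os
import shutil

# Return True if an X11 or Wayland display appears to be available.
def has_display() -> bool:
    return bool(os.environ.get("DISPLAY") or os.environ.get("WAYLAND_DISPLAY"))

if not has_display():
    if shutil.which("xvfb-run"):
        print('No display; try: xvfb-run -a python google_search.py "<query>"')
    else:
        print("No display and no xvfb-run found; install xvfb or use a GUI machine.")
```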
Debugging with verify_search.py
If you encounter issues or want to test if the setup is working:
- Run python verify_search.py.
- This script will execute several test queries (e.g., "python tutorial", mixed English/Chinese).
- Watch the browser window to see if it opens and loads Google results.
- Check the console output for success messages or error logs.
