Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Ghost Closer Web Scraper

v1.0.0

Scrape complete business intelligence from Google Maps, Facebook, and Instagram for any local business. Returns structured JSON with ratings, contact info, s...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dreamsarts/ghost-closer-web-scraper.

Prompt Preview: Install & Setup
Install the skill "Ghost Closer Web Scraper" (dreamsarts/ghost-closer-web-scraper) from ClawHub.
Skill page: https://clawhub.ai/dreamsarts/ghost-closer-web-scraper
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ghost-closer-web-scraper

ClawHub CLI


npx clawhub@latest install ghost-closer-web-scraper
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious (high confidence)
Purpose & Capability
The skill claims to only scrape public business data from Google Maps, Facebook, and Instagram, which would not normally require access to a user's .env file or their running Chrome profile. The SKILL.md lists no required environment variables in the registry metadata, yet the code explicitly loads a hardcoded .env path (/Users/edwin/.openclaw/workspace/dreams-arts/.env). That is disproportionate to the stated purpose and inconsistent with the declared requirements.
Instruction Scope
Runtime instructions tell the agent to execute scraper.py and to connect to an existing Chrome on port 9222. The script connects to the user's running Chrome via CDP, which gives access to browser cookies, local authenticated sessions, and any data accessible in that profile (potentially enabling scraping of private content). The script also loads a local .env file (hardcoded path) at startup. Those actions expand scope beyond merely fetching public pages and are not documented or justified in the registry metadata.
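To see why attaching to port 9222 is sensitive, note that the Chrome DevTools Protocol serves a plain, unauthenticated HTTP index of every open tab on that port. The sketch below is hypothetical illustration of the access pattern the scan describes, not the skill's actual code; the endpoint path `/json` and the `webSocketDebuggerUrl` field are standard CDP behavior.

```python
# Hypothetical sketch of what port 9222 exposes: the Chrome DevTools
# Protocol serves an unauthenticated HTTP index of every open tab.
import json
from urllib.error import URLError
from urllib.request import urlopen

def list_debuggable_tabs(cdp_url="http://localhost:9222/json"):
    """Return (url, title) for every tab in the attached Chrome,
    or None if no debuggable browser is listening."""
    try:
        with urlopen(cdp_url, timeout=2) as resp:
            tabs = json.load(resp)
    except (URLError, OSError):
        return None  # no Chrome running with --remote-debugging-port
    # Each entry also carries a webSocketDebuggerUrl granting full page
    # control -- cookies, DOM, and authenticated sessions included.
    return [(t.get("url"), t.get("title")) for t in tabs]
```

Any local process can make this request; nothing distinguishes the skill's scraper from any other code that finds the port open.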
Install Mechanism
No install spec in the registry (instruction-only with a code file). The SKILL.md requires Playwright and a Chrome instance with remote debugging; Playwright is an expected dependency for automated browsing. There is no automated installer, which lowers some risk, but the skill expects the environment to be set up in a way that grants it broad access (connected Chrome).
Credentials
Registry shows no required env vars, but the code loads a hardcoded .env file from /Users/edwin/.openclaw/workspace/dreams-arts/.env. That file may contain secrets unrelated to scraping (API keys, tokens). In addition, connecting to an existing Chrome instance can expose session cookies and tokens for logged-in accounts (Facebook/Instagram), which is a high-privilege access not justified by the public-data scraping description.
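The risk is easiest to see from what a .env load actually does. The following is a hypothetical, minimal approximation of the pattern the scan flags (the skill's real code is not shown here): every KEY=VALUE line in the file lands in the process environment, whether or not the scraper needs it.

```python
# Hypothetical sketch approximating a hardcoded .env load: every secret
# in the file becomes readable by the process, related to scraping or not.
import os

def load_env_file(path):
    """Minimal .env parser: each KEY=VALUE line lands in os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # An unrelated payment or CI token stored in the same
                # file is picked up just as easily as a scraper key.
                os.environ[key.strip()] = value.strip()
```

This is why reviewing the referenced .env file before running the skill matters: the loader makes no distinction between the secrets it needs and the secrets that happen to be there.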
Persistence & Privilege
The skill is not always-enabled and uses default autonomous invocation. It does not request system-wide persistent installation in the manifest. However, the runtime behavior (attaching to a running browser process) gives it effective access to local authenticated state during execution — a runtime privilege that should be treated as sensitive.
What to consider before installing
This skill is flagged as suspicious because it silently reads a hardcoded .env file and attaches to your existing Chrome browser (remote debugging), which can expose cookies and credentials. Before installing or running it:

  1. Inspect the full scraper.py for any network calls or hidden endpoints (the distributed snippet doesn't show exfiltration, but you must verify the remainder).
  2. Do not run it against your personal Chrome profile — if you must test, start a dedicated ephemeral Chrome with remote debugging and no logged-in accounts.
  3. Open and review the .env file referenced by the script; do not allow the script to read any .env containing secrets.
  4. Prefer running in an isolated VM/container with no sensitive credentials or profiles mounted.
  5. Ask the publisher why the .env path is hardcoded and why the skill requires attachment to an existing browser; ask for an option to launch a fresh, controlled browser context instead.
  6. Consider using official APIs (Google/Facebook/Instagram) instead of automated scraping to avoid privacy and terms-of-service issues.

If you cannot validate the code and the author's rationale, avoid using this skill with real credentials or production data.
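Item 2 above can be done in two lines. This is a hypothetical example, not part of the skill: the Chrome binary name varies by platform (google-chrome, chromium, or the macOS app bundle path), and the throwaway profile directory guarantees no logged-in accounts are exposed on the debugging port.

```shell
# Create a throwaway profile so nothing personal is reachable on port 9222.
PROFILE_DIR="$(mktemp -d)"
# Binary name varies by platform; only launch if one is actually on PATH.
if command -v google-chrome >/dev/null 2>&1; then
  google-chrome --remote-debugging-port=9222 \
    --user-data-dir="$PROFILE_DIR" --no-first-run &
fi
```

When testing is done, killing the browser and deleting `$PROFILE_DIR` leaves no state behind.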

Like a lobster shell, security has layers — review code before you run it.

latest: vk97a2gr74z2501pjbzm7bjy7b984g9nh
72 downloads
0 stars
1 version
Updated 2w ago
v1.0.0
MIT-0

Ghost Closer Web Scraper

Purpose

Automates the research phase of the Ghost Closer workflow. Given a business name and location, this skill scrapes Google Maps, Facebook, and Instagram to build a complete business intelligence profile in structured JSON.

Requirements

  • Python 3.10+
  • Playwright (pip install playwright)
  • Chrome running with remote debugging on port 9222
  • .env file at /Users/edwin/.openclaw/workspace/dreams-arts/.env

Usage

From Command Line

python scraper.py "Business Name" "City, State"

From Python

import asyncio

from scraper import GhostCloserScraper

async def main():
    scraper = GhostCloserScraper()
    result = await scraper.run("La Taza Coffee", "Caguas, PR")
    print(result)

asyncio.run(main())

Output Format

{
  "business_name": "La Taza Coffee",
  "location_query": "Caguas, PR",
  "google_maps": {
    "name": "La Taza Coffee Shop",
    "rating": 4.7,
    "review_count": 312,
    "address": "123 Calle Comercio, Caguas, PR 00725",
    "phone": "+1-787-555-1234",
    "website": "https://latazacoffee.com",
    "hours": {"Mon": "7AM-9PM", "Tue": "7AM-9PM"},
    "categories": ["Coffee shop", "Cafe"],
    "photo_urls": ["https://..."]
  },
  "facebook": {
    "page_url": "https://facebook.com/latazacoffee",
    "followers": 2450,
    "likes": 2300,
    "logo_url": "https://...",
    "recent_posts": [
      {"text": "New seasonal blend!", "date": "2026-04-05", "likes": 45}
    ]
  },
  "instagram": {
    "handle": "@latazacoffee",
    "profile_url": "https://instagram.com/latazacoffee"
  },
  "services_or_menu": ["Espresso $3.50", "Latte $4.75"],
  "scraped_at": "2026-04-09T14:30:00Z"
}
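Before feeding the output into downstream tooling, it is worth checking that the documented top-level keys are actually present. A small sketch, with the field names taken from the sample above:

```python
# Sketch: validate scraper output against the documented top-level keys.
import json

EXPECTED_KEYS = {
    "business_name", "location_query", "google_maps",
    "facebook", "instagram", "services_or_menu", "scraped_at",
}

def validate_profile(raw: str) -> dict:
    """Parse scraper output and fail loudly if documented keys are missing."""
    profile = json.loads(raw)
    missing = EXPECTED_KEYS - profile.keys()
    if missing:
        raise ValueError(f"scraper output missing keys: {sorted(missing)}")
    return profile
```

Note that per the Error Handling section, `google_maps` and `facebook` may be null, so a key-presence check is the right level of strictness here.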

How Claude Should Use This Skill

  1. Identify the business: Extract the business name and location from the user's request.
  2. Run the scraper: Execute python scraper.py "Business Name" "City, State" via Bash.
  3. Parse the JSON output: The script prints valid JSON to stdout.
  4. Use the data: Feed into Ghost Closer page builder, lead generation, or competitive analysis.
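Steps 2 and 3 above amount to a subprocess call whose stdout is parsed as JSON. A minimal sketch, assuming scraper.py is reachable at the given path:

```python
# Sketch of steps 2-3: invoke the scraper, parse its stdout as JSON.
import json
import subprocess
import sys

def run_scraper(name: str, location: str, script: str = "scraper.py") -> dict:
    """Run the scraper as documented and parse its stdout as JSON.
    Per the README, logs go to stderr and stdout is always valid JSON."""
    proc = subprocess.run(
        [sys.executable, script, name, location],
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)
```

Capturing stderr separately preserves the README's contract that stdout contains nothing but the JSON document.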

Error Handling

  • If Google Maps returns no results, the google_maps field will be null.
  • If Facebook page is not found, facebook will be null.
  • Network errors are retried up to 3 times with exponential backoff.
  • All errors are logged to stderr; stdout always contains valid JSON.
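The retry behavior described above can be approximated as follows. This is a generic sketch of exponential backoff, not the skill's actual implementation:

```python
# Generic sketch of retry-with-exponential-backoff (1s, 2s, 4s, ...).
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn, retrying on network-style (OSError) failures.
    The last failure is re-raised once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return fn()
        except OSError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Catching only `OSError` (the base of connection and timeout errors) avoids silently retrying on genuine bugs such as parse failures.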

Notes

  • Connects to existing Chrome on port 9222 (never launches a new browser).
  • Respects rate limits with built-in delays between requests.
  • Photos are returned as URLs only (not downloaded).
