Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Super Lobster

v1.0.0

Performs aggressive web research and data extraction via local scripts, browser rendering, crawling, and command execution on a China-networked gateway.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mrherojack/super-lobster.

Prompt preview: Install & Setup
Install the skill "Super Lobster" (mrherojack/super-lobster) from ClawHub.
Skill page: https://clawhub.ai/mrherojack/super-lobster
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install super-lobster

ClawHub CLI


npx clawhub@latest install super-lobster
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description match the included tools: fetch, render, extract, and crawl scripts for aggressive web research. However, the skill invokes /usr/bin/google-chrome-stable and requires Python libraries (requests, BeautifulSoup, trafilatura) but declares no required binaries or dependencies — a mismatch between claimed needs and actual files.
Instruction Scope
SKILL.md explicitly allows writing and executing arbitrary Python/shell programs under /root/.openclaw/workspace/memory/tmp and running local commands (including headless Chrome). That grants the agent broad ability to run arbitrary code and make network requests beyond narrowly-scoped scraping, which could be used to exfiltrate data or perform other actions outside the stated task.
Install Mechanism
There is no install spec (the skill is instruction-only), and all code files are included in the skill bundle. This minimizes remote-download risk, but the runtime depends on host-provisioned binaries (Chrome) and Python packages that may not exist; the skill assumes host tooling that isn't declared.
Credentials
The skill requests no credentials or env vars, yet the tools will perform arbitrary outbound network requests and require host executables and Python packages. The lack of declared required binaries/deps and the instruction to write code as root are disproportionate to a simple scraper and reduce transparency about needed privileges.
Persistence & Privilege
The skill's always flag is false (good), but the SKILL.md directs the agent to create and execute scripts under /root paths. That effectively grants persistent local execution capability on the gateway in this skill's user context, increasing the blast radius if the skill or its outputs are malicious or flawed.
What to consider before installing
This skill provides useful scraping tools, but it also tells the agent to write and run arbitrary code as root and to invoke host Chrome without declaring that dependency. Before installing:

  1. Verify the host has /usr/bin/google-chrome-stable and the required Python packages (requests, bs4, trafilatura), or run the skill in an isolated sandbox (a preflight sketch follows this list).
  2. Audit the bundled scripts line by line (they are short) and fix any bugs; note that fetch_url.py contains a likely string/newline bug.
  3. Restrict where the agent can write and execute code (avoid /root if possible) and limit outbound network access to only the endpoints you need.
  4. If you don't fully trust the skill author or the gateway environment, do not enable this skill on production or sensitive hosts.
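
A quick preflight along the following lines can confirm those undeclared dependencies before the skill ever runs. This is a sketch, not part of the bundle; the binary path and package names are taken from the scan findings above.

# preflight_check.py - hypothetical helper, not shipped with the skill.
# Verifies the host tooling the scan says Super Lobster silently assumes.
import importlib.util
import shutil
import sys

REQUIRED_BINARIES = ["/usr/bin/google-chrome-stable"]
REQUIRED_PACKAGES = ["requests", "bs4", "trafilatura"]  # bs4 == BeautifulSoup

missing = []
for binary in REQUIRED_BINARIES:
    if shutil.which(binary) is None:
        missing.append(f"binary: {binary}")
for package in REQUIRED_PACKAGES:
    if importlib.util.find_spec(package) is None:
        missing.append(f"python package: {package}")

if missing:
    print("Missing dependencies:")
    for item in missing:
        print(f"  - {item}")
    sys.exit(1)
print("All inspected dependencies are present.")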

Like a lobster shell, security has layers — review code before you run it.

latest: vk97bdfdeqhtkz8ex0pze99nmj983y68t
94 downloads
0 stars
1 version
Updated 3w ago
v1.0.0
MIT-0

Super Lobster

Use this skill when tasks require aggressive web research, browser rendering, local scripting, crawling, extraction, and command execution on this gateway.

Operating model

  • This host is in a China network environment. Do not assume western search engines or API endpoints are reachable or stable.
  • Prefer first-party sites, direct URLs, mirrors, and browser rendering over public search APIs.
  • Prefer local execution on the gateway for scraping, parsing, coding, and automation.

Default workflow

  1. If a direct URL is known, fetch it with fetch_url.py.
  2. If only the readable article body matters, use extract_main_text.py.
  3. If the page is JS-heavy or renders differently in browsers, use render_url.py or chrome_dump_dom.sh.
  4. If you need to discover more pages inside the same site, use crawl_site.py.
  5. For multi-step processing, write a Python script under /root/.openclaw/workspace/memory/tmp and run it (a minimal sketch of this pattern follows the list).
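
A minimal sketch of step 5, assuming the tmp directory from this SKILL.md; the scratch script's name and contents are purely illustrative:

# run_scratch.py - illustrative only; the tmp path comes from this SKILL.md.
import pathlib
import subprocess

TMP = pathlib.Path("/root/.openclaw/workspace/memory/tmp")
TMP.mkdir(parents=True, exist_ok=True)

# Write a small scratch script (hypothetical example: dedupe a list of URLs).
script = TMP / "dedupe_links.py"
script.write_text(
    "import sys\n"
    "seen = set()\n"
    "for line in sys.stdin:\n"
    "    url = line.strip()\n"
    "    if url and url not in seen:\n"
    "        seen.add(url)\n"
    "        print(url)\n"
)

# Run it, feeding sample input on stdin.
subprocess.run(
    ["python3", str(script)],
    input="https://a.example\nhttps://a.example\nhttps://b.example\n",
    text=True,
    check=True,
)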

Tools

  • /root/.openclaw/workspace/skills/super-lobster/bin/fetch_url.py <url> Returns metadata plus a text and HTML preview (see the usage sketch after this list).
  • /root/.openclaw/workspace/skills/super-lobster/bin/extract_main_text.py <url> Extracts the readable main content.
  • /root/.openclaw/workspace/skills/super-lobster/bin/chrome_dump_dom.sh <url> Dumps browser-rendered DOM with Chrome headless.
  • /root/.openclaw/workspace/skills/super-lobster/bin/render_url.py <url> Python wrapper around Chrome headless DOM rendering.
  • /root/.openclaw/workspace/skills/super-lobster/bin/crawl_site.py <url> --limit 20 Same-site link discovery crawl.
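
As a usage example, invoking the fetcher from a scratch script might look like this; the tool path is copied from the list above, while the exact shape of its stdout is an assumption, since the output format isn't documented here:

# call_fetch.py - sketch of invoking a bundled tool via subprocess.
import subprocess

BIN = "/root/.openclaw/workspace/skills/super-lobster/bin"

result = subprocess.run(
    [f"{BIN}/fetch_url.py", "https://example.com"],
    capture_output=True,
    text=True,
    timeout=60,
)
if result.returncode != 0:
    raise RuntimeError(f"fetch_url.py failed: {result.stderr.strip()}")
# Per the tool's description, stdout holds metadata plus a text/HTML preview.
print(result.stdout)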

Browser rules

  • Prefer fetch_url.py or extract_main_text.py first for static pages.
  • Use browser rendering only when the static fetch is incomplete or JS-dependent (a fallback sketch follows this list).
  • Do not depend on OpenClaw's built-in browser RPC in this environment unless it has been explicitly verified as working in the current session.
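
The static-first rule can be encoded as a simple fallback, roughly as below; the length threshold standing in for "static fetch is incomplete" is a placeholder assumption to tune per site:

# fetch_with_fallback.py - sketch of the static-first, render-second rule.
import subprocess

BIN = "/root/.openclaw/workspace/skills/super-lobster/bin"

def fetch(url: str) -> str:
    static = subprocess.run(
        [f"{BIN}/fetch_url.py", url], capture_output=True, text=True, timeout=60
    )
    body = static.stdout
    # Placeholder heuristic: fall back to headless Chrome when the static
    # fetch fails or returns suspiciously little text (likely JS-heavy page).
    if static.returncode != 0 or len(body) < 500:
        rendered = subprocess.run(
            [f"{BIN}/render_url.py", url], capture_output=True, text=True, timeout=120
        )
        rendered.check_returncode()
        body = rendered.stdout
    return body

print(fetch("https://example.com")[:200])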

Coding and execution rules

  • You may write and execute Python or shell programs locally on the gateway.
  • Prefer Python for scraping, parsing, and data cleanup.
  • Keep scratch outputs in /root/.openclaw/workspace/memory/tmp.
  • Remove clearly temporary files after use unless they are likely to be reused (a cleanup sketch follows this list).
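
One way to honor the scratch-then-clean rule is a context manager around a throwaway subdirectory; a sketch, assuming the tmp path above:

# scratch.py - sketch of the keep-scratch-then-clean-up pattern.
import contextlib
import pathlib
import shutil
import tempfile

TMP_ROOT = pathlib.Path("/root/.openclaw/workspace/memory/tmp")

@contextlib.contextmanager
def scratch_dir(keep: bool = False):
    """Yield a throwaway subdirectory; delete it afterwards unless keep=True."""
    TMP_ROOT.mkdir(parents=True, exist_ok=True)
    path = pathlib.Path(tempfile.mkdtemp(dir=TMP_ROOT))
    try:
        yield path
    finally:
        if not keep:
            shutil.rmtree(path, ignore_errors=True)

with scratch_dir() as d:
    (d / "page.html").write_text("<html>...</html>")
    # ... parse, extract, summarize ...
# The directory is removed here; pass keep=True for outputs likely to be reused.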

Network rules

  • Avoid default reliance on DuckDuckGo, Tavily, Exa, or similar public search providers unless reachability is verified first (a probe sketch follows this list).
  • If provider keys are added later and connectivity is stable, treat them as accelerators, not hard dependencies.
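
Reachability can be verified with a cheap probe at the start of a session, something like the sketch below; the provider endpoints listed are illustrative, and requests is already assumed by the bundled tools:

# probe.py - sketch of a pre-flight reachability check for search providers.
import requests

PROVIDERS = {
    "duckduckgo": "https://duckduckgo.com",  # illustrative endpoints only
    "tavily": "https://api.tavily.com",
}

def reachable(url: str, timeout: float = 5.0) -> bool:
    try:
        requests.head(url, timeout=timeout, allow_redirects=True)
        return True
    except requests.RequestException:
        return False

usable = sorted(name for name, url in PROVIDERS.items() if reachable(url))
print(f"Usable providers this session: {usable or 'none'}")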
