Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

gpu-cluster-monitor

v1.0.2

Monitors GPU cluster health and usage, providing real-time status, performance metrics, and alerts for efficient resource management.


Install

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for sounderliu/gpu-cluster-monitor.

Prompt preview: Install & Setup
Install the skill "gpu-cluster-monitor" (sounderliu/gpu-cluster-monitor) from ClawHub.
Skill page: https://clawhub.ai/sounderliu/gpu-cluster-monitor
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install gpu-cluster-monitor

ClawHub CLI


npx clawhub@latest install gpu-cluster-monitor
Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (high confidence)

Purpose & Capability
Name and description claim a GPU cluster monitor, but SKILL.md and the included code implement a 'deep-scraper' (YouTube/X scraping with network interception). A GPU monitoring skill would not need Crawlee/Playwright or instructions for building a Docker scraper. This is a major mismatch.
Instruction Scope
The SKILL.md instructs building and running a containerized Playwright/Crawlee scraper that intercepts network requests, clears cookies, and triggers UI interactions to capture hidden APIs and transcripts. That scope goes beyond a resource monitor and includes actions that could capture sensitive network responses or personally identifiable content; the instructions also claim to "penetrate protections", which is concerning.
Install Mechanism
There is no formal install spec, but SKILL.md expects building a Docker image (clawd-crawlee). The manifest does not include a Dockerfile despite instructing the user to keep one in the skill directory. package.json declares crawlee/playwright dependencies and an openclaw docker requirement. The missing Dockerfile and the mismatch between the registry metadata and the instructions are inconsistencies the user should verify.
Credentials
The skill requests no environment variables, but requires Docker (privileged capability to run containers) and network access; the scraping code listens to all page network requests and can fetch intercepted URLs. That capability is not justified by the registry name/description and could capture tokens or private API responses if misused.
Persistence & Privilege
`always` is false and the skill doesn't request system-wide config changes. However, it requires the ability to run Docker containers, which grants substantial runtime privileges on the host; that container-level privilege should be weighed when evaluating risk.
What to consider before installing
Do not install this expecting a GPU cluster monitor: the skill is actually a containerized deep web scraper. Before proceeding:

  1. Verify the author's identity and source (the homepage is missing).
  2. Ask for the Dockerfile and review it; do not build or run the container until you inspect its contents.
  3. Run any testing inside an isolated VM or sandbox with no sensitive mounts and restricted network access.
  4. Be aware the code intercepts page network requests and performs UI automation; it can capture API responses or tokens if pointed at authenticated pages.
  5. If you wanted GPU monitoring, reject this package and look for a different, clearly named skill that uses nvidia-smi / Prometheus exporters and requests only the credentials it needs.
  6. If you must use this scraper, ensure it complies with target sites' terms of service and applicable law, and avoid running it with elevated privileges or mounting host directories containing secrets.
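For comparison, a legitimately scoped GPU monitor needs little more than parsing `nvidia-smi` query output. A minimal sketch of that approach — the CSV sample, field list, and alert thresholds below are illustrative assumptions, not part of this skill:

```python
import csv
import io

# Sample output from:
#   nvidia-smi --query-gpu=index,name,utilization.gpu,memory.used,memory.total,temperature.gpu \
#              --format=csv,noheader,nounits
# Hard-coded here so the sketch runs without a GPU; in practice you would
# capture this text via subprocess.
SAMPLE = """\
0, NVIDIA A100-SXM4-80GB, 97, 71234, 81920, 78
1, NVIDIA A100-SXM4-80GB, 12, 1024, 81920, 41
"""

def parse_gpus(text):
    """Parse nvidia-smi CSV rows into dicts with typed fields."""
    gpus = []
    for row in csv.reader(io.StringIO(text)):
        idx, name, util, mem_used, mem_total, temp = [c.strip() for c in row]
        gpus.append({
            "index": int(idx),
            "name": name,
            "util_pct": int(util),
            "mem_used_mib": int(mem_used),
            "mem_total_mib": int(mem_total),
            "temp_c": int(temp),
        })
    return gpus

def alerts(gpus, util_hi=90, temp_hi=80):
    """Return indices of GPUs exceeding utilization or temperature thresholds."""
    return [g["index"] for g in gpus if g["util_pct"] >= util_hi or g["temp_c"] >= temp_hi]

gpus = parse_gpus(SAMPLE)
print(alerts(gpus))  # → [0] (GPU 0 is at 97% utilization)
```

Note that nothing in this sketch requires Playwright, Crawlee, network interception, or a container — which is the core of the mismatch flagged above.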

Like a lobster shell, security has layers — review code before you run it.

latest: vk97644d1qktjy5j6vbm1wcbffx81s8rm
546 downloads
0 stars
3 versions
Updated 14h ago
v1.0.2
MIT-0

Skill: deep-scraper

Overview

A high-performance engineering tool for deep web scraping. It uses a containerized Docker + Crawlee (Playwright) environment to penetrate protections on complex websites like YouTube and X/Twitter, providing "interception-level" raw data.

Requirements

  1. Docker: Must be installed and running on the host machine.
  2. Image: Build the environment with the tag clawd-crawlee.
    • Build command: docker build -t clawd-crawlee skills/deep-scraper/

Integration Guide

Simply copy the skills/deep-scraper directory into your skills/ folder. Ensure the Dockerfile remains within the skill directory for self-contained deployment.

Standard Interface (CLI)

docker run -t --rm -v $(pwd)/skills/deep-scraper/assets:/usr/src/app/assets clawd-crawlee node assets/main_handler.js [TARGET_URL]
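If you did decide to drive this interface from code (only after the review steps recommended earlier), assembling the `docker run` invocation as an argv list avoids shell-quoting surprises. A sketch; it assumes the `clawd-crawlee` image from the build step above exists, and the URL is a placeholder:

```python
import json
import os
import shlex
import subprocess  # used only when actually invoking docker

def build_scrape_cmd(target_url, assets_dir):
    """Assemble the docker run argv for the deep-scraper CLI described above."""
    return [
        "docker", "run", "-t", "--rm",
        "-v", f"{os.path.abspath(assets_dir)}:/usr/src/app/assets",
        "clawd-crawlee",
        "node", "assets/main_handler.js",
        target_url,
    ]

def run_scrape(target_url, assets_dir):
    """Invoke the container and decode its JSON stdout (not exercised here)."""
    out = subprocess.run(build_scrape_cmd(target_url, assets_dir),
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

cmd = build_scrape_cmd("https://example.com/watch?v=abc", "skills/deep-scraper/assets")
print(shlex.join(cmd))
```

Keeping the mount path explicit also makes it obvious exactly which host directory the container can see — worth checking before every run, given the scan findings.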

Output Specification (JSON)

The scraping results are printed to stdout as a JSON string:

  • status: SUCCESS | PARTIAL | ERROR
  • type: TRANSCRIPT | DESCRIPTION | GENERIC
  • videoId: (For YouTube) The validated Video ID.
  • data: The core text content or transcript.
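Since stdout JSON is the tool's only contract, it is worth validating a payload against the enumerated fields before trusting it downstream. A sketch — the sample record is invented to match the spec above:

```python
import json

# Allowed values taken from the output specification above.
VALID_STATUS = {"SUCCESS", "PARTIAL", "ERROR"}
VALID_TYPE = {"TRANSCRIPT", "DESCRIPTION", "GENERIC"}

def validate_result(raw):
    """Decode and sanity-check a deep-scraper stdout payload."""
    r = json.loads(raw)
    if r.get("status") not in VALID_STATUS:
        raise ValueError(f"unexpected status: {r.get('status')!r}")
    if r.get("type") not in VALID_TYPE:
        raise ValueError(f"unexpected type: {r.get('type')!r}")
    # Per the spec, YouTube results carry a validated Video ID.
    if r["type"] == "TRANSCRIPT" and not r.get("videoId"):
        raise ValueError("TRANSCRIPT result is missing videoId")
    return r

# Invented sample payload matching the spec.
sample = json.dumps({
    "status": "SUCCESS",
    "type": "TRANSCRIPT",
    "videoId": "dQw4w9WgXcQ",
    "data": "example transcript text",
})
print(validate_result(sample)["status"])  # → SUCCESS
```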

Core Rules

  1. ID Validation: All YouTube tasks MUST verify the Video ID to prevent cache contamination.
  2. Privacy: Strictly forbidden from scraping password-protected or non-public personal information.
  3. Alpha-Focused: Automatically strips ads and noise, delivering pure data optimized for LLM processing.
