Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

AI Cluster Pre-flight Check

v1.0.1

Pre-flight check for GPU cluster nodes — node validation before training, check cluster node health, is my GPU node ready. 26 health checks covering GPU, PCI...

0 stars · 192 downloads · 0 current · 0 all-time
By Xperf Inc. (@ops-xperf)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for ops-xperf/xperf-pre-flight.

Prompt preview: Install & Setup
Install the skill "AI Cluster Pre-flight Check" (ops-xperf/xperf-pre-flight) from ClawHub.
Skill page: https://clawhub.ai/ops-xperf/xperf-pre-flight
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: bash, jq
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install xperf-pre-flight

ClawHub CLI


npx clawhub@latest install xperf-pre-flight
Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description match the included scripts: the code implements ~26 local and cross‑node hardware and config checks (GPU, PCIe, RDMA, NUMA, firewall, BIOS, switch, etc.). Required binaries (GPU vendor tools) are appropriate. However the manifest under‑declares other binaries the scripts use (ssh, ip, lspci, ethtool, setpci, dmidecode, ipmitool, docker, etc.), and declares jq as a required binary although the scripts implement JSON formatting without jq. Also the registry marks PREFLIGHT_NODE_ID as the "primary credential" despite it being a harmless node identifier.
Instruction Scope
Runtime instructions (run preflight.sh) are explicit and the scripts do many privileged/local inspections: reading /proc/cmdline, /sys entries, lspci, lsmod, setpci, dmidecode, ipmitool, ip/ethtool, and more. Cross‑node checks will attempt SSH to the IPs you provide. The script also runs docker run for vendor test images (nvidia/cuda, rocm/*) which will pull and execute container images from the network. These behaviors are coherent with the stated purpose but amount to network activity and code execution on the host (docker images) and attempts to access low‑level system interfaces that may require root. The SKILL.md does not explicitly warn about pulling/running container images or requiring root privileges for some checks.
Install Mechanism
No install spec — bundled as scripts. That keeps install risk low (no arbitrary archive download during install). Files are present in the skill package, so nothing is fetched during install; however runtime docker actions may fetch images from registries.
Credentials
No sensitive credentials are required. The primaryEnv is PREFLIGHT_NODE_ID which is just an identifier. The skill exposes many optional environment variables (PREFLIGHT_PEER_IPS, SWITCH_HOST, SWITCH_CLI_CMD, SWITCH_USER, etc.) which are relevant to cross‑node and switch checks. Nothing requests unrelated cloud/API credentials. However the registry's labeling of PREFLIGHT_NODE_ID as a "primary credential" is misleading.
Persistence & Privilege
The always flag is false, and the skill does not request permanent presence or modify other skills. It can be invoked autonomously (the default), which is normal; note that autonomy combined with network and Docker execution increases operational impact, but autonomy alone is not flagged.
What to consider before installing
This skill appears to do what it claims (local and cross-node hardware/config checks), but take precautions before running it on production hosts:

  • Run it in a safe environment first (a non-production node or an isolated VM) to observe behavior.
  • Be aware it may require root for full coverage (dmidecode, setpci, ipmitool, reading /dev/mem, etc.). Without root, some checks will fail or be skipped.
  • Cross-node checks (PREFLIGHT_PEER_IPS) will attempt SSH to the IPs you supply; those are outbound connections from the node. If you set SWITCH_HOST or similar, it may attempt SSH to switches.
  • The script will run docker run with public vendor images (nvidia/cuda, rocm images). That will pull and execute container code from registries; review those image names and ensure pulling external images is acceptable in your environment.
  • The manifest under-declares some runtime binaries (ssh, ip, lspci, ethtool, setpci, dmidecode, ipmitool, docker). Ensure the required tooling is present and acceptable.
  • PREFLIGHT_NODE_ID is not a secret; the registry's labeling of it as a primary credential is misleading.

If you want to proceed: review the bundled scripts (they are included), run with PREFLIGHT_PEER_IPS unset (local checks only) and without PREFLIGHT_STRICT first, and consider auditing or removing the Docker tests if pulling/executing images is unacceptable in your environment.
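The under-declared-binaries point can be checked up front. A minimal sketch; the binary list comes from the scan notes above, not from the skill's manifest, so adjust it to match what you find in the bundled scripts:

```shell
#!/usr/bin/env bash
# Hypothetical pre-run audit: report which of the binaries the scripts
# reportedly call (beyond the declared bash/jq) exist on this node.
audit_bins() {
  for bin in bash jq ssh ip lspci ethtool setpci dmidecode ipmitool docker; do
    if command -v "$bin" >/dev/null 2>&1; then
      printf 'ok      %s\n' "$bin"
    else
      printf 'MISSING %s\n' "$bin"
    fi
  done
}

audit_bins
```

Running this before the first invocation tells you which checks are likely to be skipped for missing tooling rather than for a real hardware problem.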

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🔍 Clawdis
OS: Linux
Bins: bash, jq
Any bin: nvidia-smi, amd-smi, rocm-smi
Primary env: PREFLIGHT_NODE_ID
Latest: vk971f9z6772vtw02cgm3cq8was833jrh
192 downloads · 0 stars · 2 versions
Updated 21h ago
v1.0.1 · MIT-0 · Linux

AI Cluster Pre-flight Check

By Xperf Inc. — part of the ClusterReady automatic cluster and rack validation platform.

AI-powered GPU cluster pre-flight validation skill. Run 26 GPU cluster node readiness checks locally on a bare-metal node. Auto-detects GPU vendor (NVIDIA / AMD) and network type (InfiniBand / RoCE / Ethernet). Validates GPU detection, PCIe topology, RDMA/InfiniBand networking, Docker, IOMMU, NUMA, firewall, BIOS, and more.

Example Prompts

  • "Run a pre-flight check on this node"
  • "Is my GPU node ready for training?"
  • "Check cluster node health"
  • "Validate this node before training"
  • "Are my GPUs and network configured correctly?"
  • "Run all health checks on this bare-metal node"
  • "Check if GPUDirect and RDMA are working"
  • "Is IOMMU in passthrough mode?"

When to Use

  • Before running GPU workloads — ensure the node is properly configured
  • After provisioning — validate new bare-metal nodes are cluster-ready
  • Periodic health monitoring — catch hardware or config drift
  • Troubleshooting — run specific checks to diagnose issues

How to Run

Run all checks:

bash {baseDir}/preflight.sh

Run specific checks only:

PREFLIGHT_CHECKS=1.3,1.4,1.5 bash {baseDir}/preflight.sh

Run in strict mode (no skippable failures):

PREFLIGHT_STRICT=true bash {baseDir}/preflight.sh

Enable cross-node checks:

PREFLIGHT_PEER_IPS=10.0.1.11,10.0.1.12 bash {baseDir}/preflight.sh

Environment Variables

Variable | Required | Default | Description
PREFLIGHT_NODE_ID | No | hostname | Node identifier in output
PREFLIGHT_CHECKS | No | all | Comma-separated check subset (e.g. 1.2,1.3,1.5)
PREFLIGHT_STRICT | No | false | Treat skippable failures as errors
PREFLIGHT_PEER_IPS | No | (unset) | Comma-separated peer IPs for cross-node checks (1.1, 1.26, mesh ping)
MOUNT_POINT | No | auto-detect | Shared filesystem path for check 1.10
SWITCH_HOST | No | (unset) | Switch hostname for check 1.25
SWITCH_CLI_CMD | No | (unset) | Direct switch CLI command for check 1.25
SWITCH_USER | No | admin | Switch SSH user
SWITCH_SHOW_CMD | No | show interface status | Switch show command

Output

JSON to stdout, diagnostics to stderr. Pipe through jq for pretty-printing:

bash {baseDir}/preflight.sh 2>/dev/null | jq .
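Beyond pretty-printing, jq can pull out just the failures. The report shape used below (a checks array with id and status fields) is an assumption, not documented by the skill, so inspect the real output before scripting against it:

```shell
# Hypothetical report shape; the actual JSON keys emitted by
# preflight.sh may differ. Verify against real output first.
report='{"checks":[{"id":"1.2","status":"pass"},{"id":"1.14","status":"fail"}]}'

# List the IDs of failed checks only.
printf '%s' "$report" | jq -r '.checks[] | select(.status=="fail") | .id'
```

With the real script you would substitute the sample with `bash {baseDir}/preflight.sh 2>/dev/null`.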

Exit Codes

  • 0 — All checks passed (or skipped)
  • 1 — One or more checks failed
  • 2 — Configuration error
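In automation, the three exit codes map naturally onto a case statement. A sketch, where run_preflight is a stand-in for the real invocation so the wrapper itself stays runnable:

```shell
#!/usr/bin/env bash
# Hypothetical CI gate around the pre-flight run. "true" stands in for
# the real call: bash {baseDir}/preflight.sh
run_preflight() { true; }

run_preflight
case $? in
  0) echo "node ready" ;;                                # all checks passed or skipped
  1) echo "checks failed; keep node out of rotation" ;;  # one or more failures
  2) echo "configuration error; review env vars" ;;      # bad configuration
  *) echo "unexpected exit code" ;;
esac
```

This lets a provisioning pipeline distinguish "node is unhealthy" (exit 1) from "the check itself was misconfigured" (exit 2) instead of treating both as the same failure.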

Check Catalog

ID | Name | Scope | Description
1.1 | SSH Connectivity | Cross-node | Tests SSH to peer nodes
1.2 | Docker Service | Local | Verifies Docker daemon is running
1.3 | GPU Detection | Local | Detects NVIDIA or AMD GPUs
1.4 | GPU Count and Model | Local | Reports GPU count and model
1.5 | GPU Driver Version | Local | Reports GPU driver version
1.6 | OS and Kernel | Local | Reports OS and kernel version
1.7 | Network Type | Local | Detects InfiniBand, RoCE, or Ethernet
1.8 | IB/NIC Ports | Local | Checks network port status
1.9 | GPU Container Toolkit | Local | Tests GPU access inside Docker
1.10 | Shared Filesystem | Local | Validates shared mount read/write
1.11 | GPUDirect Kernel Module | Local | Checks RDMA peer memory modules
1.12 | IOMMU Passthrough | Local | Validates IOMMU in passthrough mode
1.13 | NUMA Balancing | Local | Reports NUMA balancing setting
1.14 | Firewall Disabled | Local | Checks firewall is inactive
1.15 | PCIe Link Speed/Width | Local | Validates PCIe link negotiation
1.16 | PCIe ACS Disabled | Local | Checks ACS on PCI bridges
1.17 | GPU-NIC PCIe Affinity | Local | Reports GPU/NIC topology
1.18 | NIC Firmware | Local | Reports NIC firmware version
1.19 | BIOS Settings | Local | Validates BIOS/IPMI settings
1.20 | Link Quality | Local | Checks error counters
1.21 | Transceiver Health | Local | Checks optical module health
1.22 | MTU Configuration | Local | Validates jumbo frames (MTU >= 9000)
1.23 | Fabric Topology | Local | Discovers IB fabric topology
1.24 | RoCE QoS/NIC Config | Local | Reports QoS and GID configuration
1.25 | Switch QoS | Cross-node | Validates switch configuration
1.26 | Network Routing | Cross-node | Tests ping to peer nodes
mesh | L3 Mesh Ping | Cross-node | Tests connectivity to all peers

Interpreting Results

  • pass — Check succeeded
  • fail — Check failed (node may not be ready)
  • skip — Check failed with a known non-critical signature (e.g. missing optional tool). In strict mode, these become failures.

Checks auto-detect GPU vendor (NVIDIA vs AMD) and network type (InfiniBand vs RoCE vs Ethernet), running the appropriate commands for each.
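That auto-detection can be mimicked for a quick manual sanity check. This sketch only probes for the vendor tools on PATH; it is not the skill's actual detection logic, which may inspect lspci or sysfs as well:

```shell
# Hypothetical vendor probe based on which vendor tool is installed.
# preflight.sh's real detection may differ (e.g. it could read lspci).
detect_gpu_vendor() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    echo nvidia
  elif command -v rocm-smi >/dev/null 2>&1 || command -v amd-smi >/dev/null 2>&1; then
    echo amd
  else
    echo none
  fi
}

detect_gpu_vendor
```

If this prints none on a node you believe has GPUs, the driver stack is likely missing and checks 1.3 through 1.5 can be expected to fail.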

Support
