Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

GPU Keepalive with KeepGPU

v1.0.0

Install and operate KeepGPU for GPU keep-alive with both blocking CLI and non-blocking service workflows. Use when users ask for keep-gpu command constructio...

Security Scan
VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
Name and description match the instructions: the SKILL.md explains installing KeepGPU, checking for GPUs, and running blocking or service modes. Required resources (CUDA/ROCm, PyTorch) are appropriate for a GPU keep-alive tool.
Instruction Scope
Runtime instructions are narrowly scoped to installing, starting, inspecting, and stopping KeepGPU; referenced commands and files (torch.cuda.device_count(), nvidia-smi, nohup/tmux, keepgpu.log, keepgpu.pid) are relevant to the stated task. There are no instructions to read unrelated user files or exfiltrate data.
Install Mechanism
The skill recommends pip installs from PyTorch's wheel index and either PyPI or a GitHub repo. These are expected for Python tooling, but pip installing directly from a Git URL will execute the package's install scripts on the machine — the user should trust the repository or prefer an official PyPI release or review the source before installing.
Credentials
No environment variables, credentials, or config paths are requested. The instructions only require local GPU drivers/runtimes and typical command-line tools, which are proportional to the functionality.
Persistence & Privilege
The skill does not request elevated platform privileges and 'always' is false. However, service/non-blocking usage will create background processes and may open a local dashboard port (127.0.0.1:8765); users should be aware these processes persist until stopped and may conflict with cluster policies.
Assessment
This skill appears to do what it says: install and run KeepGPU. Before installing, consider:

  • Prefer the published PyPI release if available.
  • If you pip install from the GitHub URL, review the repository (setup scripts and entry points) first, because pip install from a remote repo runs code on your machine.
  • Run installs in a virtualenv or container if you are unsure.
  • Be aware that service mode spawns persistent background processes and exposes a local dashboard on port 8765; ensure this fits your environment and cluster policies.
  • Verify the repository owner/maintainer and check for recent activity or issues before installing on a production node.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97221hjjz5939y4vwk457792h82bwsf
329 downloads
0 stars
1 version
Updated 7h ago
v1.0.0
MIT-0

KeepGPU CLI Operator

Use this workflow to run keep-gpu safely and effectively.

Prerequisites

  • Confirm at least one GPU is visible (python -c "import torch; print(torch.cuda.device_count())").
  • Run commands in a shell where CUDA/ROCm drivers are already available.
  • Use Ctrl+C to stop KeepGPU and release memory cleanly.
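
The prerequisite checks above can be wrapped into one defensive preflight script. A minimal sketch, assuming only POSIX sh; `gpu_count` is an illustrative helper, not part of KeepGPU, and it falls back gracefully when python/torch or nvidia-smi is missing:

```shell
# Preflight sketch: report GPU visibility before starting keep-gpu.

gpu_count() {
  # Echo the number of GPUs torch can see, or 0 if python/torch is unavailable.
  python -c "import torch; print(torch.cuda.device_count())" 2>/dev/null || echo 0
}

n="$(gpu_count)"
echo "visible GPUs: $n"

if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=index,memory.used --format=csv,noheader
else
  echo "nvidia-smi not found (expected on ROCm or driverless hosts)"
fi

[ "$n" -ge 1 ] || echo "warning: no GPUs visible; keep-gpu will have nothing to hold"
```

Running this before keep-gpu avoids starting a keep-alive on a node with no visible devices.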

Install KeepGPU

Install PyTorch first for your platform, then install KeepGPU.

Option A: Install from package index

# CUDA example (change cu121 to your CUDA version)
pip install --index-url https://download.pytorch.org/whl/cu121 torch
pip install keep-gpu
# ROCm example (change rocm6.1 to your ROCm version)
pip install --index-url https://download.pytorch.org/whl/rocm6.1 torch
pip install "keep-gpu[rocm]"

Option B: Install directly from Git URL (no local clone)

Prefer this option when users only need the CLI and do not need to edit the source locally; it avoids the overhead of a clone and its cleanup. Note that installing from a Git URL runs the package's build scripts on your machine, so review the repository first.

pip install "git+https://github.com/Wangmerlyn/KeepGPU.git"

If SSH access is configured:

pip install "git+ssh://git@github.com/Wangmerlyn/KeepGPU.git"

ROCm variant from Git URL:

pip install "keep_gpu[rocm] @ git+https://github.com/Wangmerlyn/KeepGPU.git"

Option C: Install from a local source checkout (explicit path)

Use this option only when users already have a local checkout or plan to edit source.

git clone https://github.com/Wangmerlyn/KeepGPU.git
cd KeepGPU
pip install -e .

If the checkout already exists somewhere else, install by absolute path:

pip install -e /absolute/path/to/KeepGPU

For ROCm users from local checkout:

pip install -e ".[rocm]"

Verify installation:

keep-gpu --help

Command model

KeepGPU supports two execution modes.

Blocking mode (compatibility)

keep-gpu --gpu-ids 0 --vram 1GiB --interval 60 --busy-threshold 25

Use when users intentionally want one foreground process and manual Ctrl+C stop.
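
For a bounded hold in blocking mode, one option is to wrap the command in coreutils `timeout`, which can deliver the same SIGINT that Ctrl+C would after a time limit. A sketch; `bounded_blocking_cmd` and the 1800-second limit are illustrative, not KeepGPU features:

```shell
# Build a bounded blocking invocation: timeout delivers SIGINT (the same
# signal as Ctrl+C) after the given number of seconds, so memory is
# released cleanly without a manual stop.
bounded_blocking_cmd() {
  secs="$1" gpu_ids="$2" vram="$3"
  printf 'timeout -s INT %s keep-gpu --gpu-ids %s --vram %s --interval 60 --busy-threshold 25\n' \
    "$secs" "$gpu_ids" "$vram"
}

bounded_blocking_cmd 1800 0 1GiB
# -> timeout -s INT 1800 keep-gpu --gpu-ids 0 --vram 1GiB --interval 60 --busy-threshold 25
```

The function only prints the command; run it with eval or paste it into the session when the bound is confirmed.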

Non-blocking mode (recommended for agents)

keep-gpu start --gpu-ids 0 --vram 1GiB --interval 60 --busy-threshold 25
keep-gpu status
keep-gpu stop --all
keep-gpu service-stop

keep-gpu start launches the local service automatically if it is not already running.

Ctrl+C stops only foreground blocking runs. Sessions started with keep-gpu start are managed with keep-gpu status, keep-gpu stop, and keep-gpu service-stop.

CLI options to tune:

  • --gpu-ids: comma-separated IDs (e.g. 0 or 0,1). If omitted, KeepGPU uses all visible GPUs.
  • --vram: VRAM to hold per GPU (512MB, 1GiB, or raw bytes).
  • --interval: seconds between keep-alive cycles.
  • --busy-threshold (alias: --util-threshold): if GPU utilization exceeds this percentage, KeepGPU backs off.

Legacy compatibility:

  • --threshold is deprecated but still accepted.
  • Numeric --threshold maps to busy threshold.
  • String --threshold maps to VRAM.
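
The legacy mapping above can be expressed as a small normalizer. This is a sketch of the documented rule, not KeepGPU's own argument parsing, and `normalize_threshold` is an illustrative name:

```shell
# Normalize a deprecated --threshold value into its modern equivalent,
# following the documented rule: pure numbers map to --busy-threshold,
# strings with units map to --vram.
normalize_threshold() {
  case "$1" in
    *[!0-9]*) printf '%s %s\n' --vram "$1" ;;            # e.g. "1GiB", "512MB"
    *)        printf '%s %s\n' --busy-threshold "$1" ;;  # e.g. "25"
  esac
}

normalize_threshold 25    # -> --busy-threshold 25
normalize_threshold 1GiB  # -> --vram 1GiB
```

This lets an agent rewrite old invocations into the current flags instead of passing the deprecated one through.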

Agent workflow

  1. Collect workload intent: target GPUs, hold duration, and whether node is shared.
  2. Choose mode:
    • blocking mode for manual shell sessions,
    • non-blocking mode for agent pipelines (default recommendation).
  3. Choose safe defaults when unspecified: --vram 1GiB, --interval 60-120, --busy-threshold 25.
  4. Provide command sequence with verification and stop command.
  5. For non-blocking mode, include status, stop, and daemon shutdown (service-stop).
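
The workflow above can be sketched as one wrapper that emits the full non-blocking sequence with the safe defaults from step 3. `keepgpu_plan` is a hypothetical helper, not a KeepGPU command:

```shell
# Emit the full agent sequence for a non-blocking session: start with the
# safe defaults, verify, stop the session, then shut the daemon down.
keepgpu_plan() {
  gpu_ids="${1:-0}" vram="${2:-1GiB}" interval="${3:-60}" busy="${4:-25}"
  cat <<EOF
keep-gpu start --gpu-ids $gpu_ids --vram $vram --interval $interval --busy-threshold $busy
keep-gpu status
keep-gpu stop --all
keep-gpu service-stop
EOF
}

keepgpu_plan 0,1 512MB 120
```

Printing the plan first gives the user a chance to confirm targets before any background process is spawned.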

Command templates

Single GPU while preprocessing (blocking):

keep-gpu --gpu-ids 0 --vram 1GiB --interval 60 --busy-threshold 25

All visible GPUs with lighter load (blocking):

keep-gpu --vram 512MB --interval 180

Agent-friendly non-blocking sequence:

keep-gpu start --gpu-ids 0 --vram 1GiB --interval 60 --busy-threshold 25
keep-gpu status
keep-gpu stop --job-id <job_id>
keep-gpu service-stop

Open dashboard:

http://127.0.0.1:8765/
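
An agent can probe the dashboard without assuming the service is up. A sketch assuming `curl` is available; port 8765 is the dashboard address above:

```shell
# Probe the local dashboard; prints "up" or "down" instead of failing,
# so it is safe to run before the service has been started.
dashboard_status() {
  if command -v curl >/dev/null 2>&1 &&
     curl -fsS --max-time 2 http://127.0.0.1:8765/ >/dev/null 2>&1; then
    echo up
  else
    echo down
  fi
}

dashboard_status   # prints "up" or "down"
```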

Remote sessions (preferred: tmux for visibility and control):

tmux new -s keepgpu
keep-gpu --gpu-ids 0 --vram 1GiB --interval 300
# Detach with Ctrl+b then d; reattach with: tmux attach -t keepgpu

Fallback when tmux is unavailable:

nohup keep-gpu --gpu-ids 0 --vram 1GiB --interval 300 > keepgpu.log 2>&1 &
echo $! > keepgpu.pid
# Monitor: tail -f keepgpu.log
# Stop: kill "$(cat keepgpu.pid)"
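
The stop step of the fallback can be made safer by checking that the recorded PID is still alive before signaling it. A sketch; `stop_nohup_keepgpu` is an illustrative helper, and it sends SIGINT (the same signal as Ctrl+C) for a clean release:

```shell
# Stop a nohup-launched run only if the recorded PID is still alive;
# otherwise report and clean up the stale pid file.
# Usage: stop_nohup_keepgpu [pidfile]
stop_nohup_keepgpu() {
  pidfile="${1:-keepgpu.pid}"
  if [ ! -f "$pidfile" ]; then
    echo "no pid file: $pidfile"
    return 1
  fi
  pid="$(cat "$pidfile")"
  if kill -0 "$pid" 2>/dev/null; then
    kill -INT "$pid" && echo "sent SIGINT to $pid"
  else
    echo "stale pid file (process $pid gone)"
  fi
  rm -f "$pidfile"
}
```

This avoids killing an unrelated process when the node has rebooted and the PID has been reused or the file is stale.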

Troubleshooting

  • Invalid --gpu-ids: ensure comma-separated integers only.
  • Allocation failure / OOM: reduce --vram or free memory first.
  • No utilization telemetry: ensure nvidia-ml-py works and nvidia-smi is available.
  • No GPUs detected: verify drivers, CUDA/ROCm runtime, and torch.cuda.device_count().
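
Several of these failures can be caught before launch. For example, the --gpu-ids format from the first bullet can be validated up front (a sketch; `valid_gpu_ids` is an illustrative helper):

```shell
# Validate a --gpu-ids value before invoking keep-gpu: comma-separated
# non-negative integers, no spaces, no leading/trailing or doubled commas.
valid_gpu_ids() {
  case "$1" in
    *[!0-9,]*|''|,*|*,|*,,*) return 1 ;;  # junk characters, empty, or bad commas
    *) return 0 ;;
  esac
}

valid_gpu_ids "0,1" && echo ok    # ok
valid_gpu_ids "0, 1" || echo bad  # bad: space not allowed
```

Rejecting malformed IDs locally gives a clearer error than letting keep-gpu fail mid-startup.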

Example

User request: "Install KeepGPU from GitHub and keep GPU 0 alive while I preprocess."

Suggested response shape:

  1. Install: pip install "git+https://github.com/Wangmerlyn/KeepGPU.git"
  2. Run: keep-gpu start --gpu-ids 0 --vram 1GiB --interval 60 --busy-threshold 25
  3. Verify: keep-gpu status or dashboard http://127.0.0.1:8765/; stop session with keep-gpu stop --job-id <job_id> and daemon with keep-gpu service-stop.

Limitations

  • KeepGPU is not a scheduler; it only keeps already accessible GPUs active.
  • KeepGPU behavior depends on cluster policy; some schedulers require higher VRAM or tighter intervals.
