Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

gpu monitor

v1.0.0

Provides real-time NVIDIA GPU usage and memory stats, plus Ollama model layer GPU/CPU distribution via server.log parsing with live updates.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup: copy the exact prompt below and paste it into OpenClaw to install tinkerjueberg/gpu-monitor.

Prompt Preview: Install & Setup
Install the skill "gpu monitor" (tinkerjueberg/gpu-monitor) from ClawHub.
Skill page: https://clawhub.ai/tinkerjueberg/gpu-monitor
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install gpu-monitor

ClawHub CLI

Package manager switcher

npx clawhub@latest install gpu-monitor
Security Scan
VirusTotal
Suspicious
OpenClaw
Benign
high confidence
Purpose & Capability
The skill is implemented to monitor NVIDIA GPUs (calls nvidia-smi) and to optionally parse an Ollama server.log; those capabilities align with the name/description. Minor inconsistency: registry metadata lists no required binaries, but both SKILL.md and the code require nvidia-smi and Python. Requiring access to an Ollama server.log is coherent for the stated 'layer distribution' feature.
Instruction Scope
Runtime instructions and the code remain narrowly scoped: they run nvidia-smi, read a per-user config file (~/.openclaw/gpu_monitor_config.json) if present, and optionally tail/parse a user-specified Ollama server.log path (reads last ~50 lines). There are no network calls, remote endpoints, or attempts to read other system credentials. Note: parsing an arbitrary log file is potentially sensitive depending on what file the user points it at, but this behavior is directly tied to the declared feature.
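A tail-read of the last ~50 lines, as described above, might look like the following minimal sketch. `tail_lines` is an illustrative helper, not the skill's actual code:

```python
from collections import deque
from pathlib import Path

def tail_lines(path, n=50):
    """Return the last n lines of a text file, or [] if it doesn't exist.

    deque(maxlen=n) keeps memory bounded even for very large log files.
    """
    p = Path(path)
    if not p.exists():
        return []
    with p.open("r", encoding="utf-8", errors="replace") as f:
        return list(deque(f, maxlen=n))
```

Bounding the read this way means a monitor never loads the whole log into memory, which matters for long-running Ollama servers.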
Install Mechanism
No install spec or remote downloads are provided; code files are bundled with the skill. No external archives or untrusted URLs are fetched or extracted by the skill.
Credentials
The skill requests no environment variables or credentials. It does read a config file in the user's home (~/.openclaw/gpu_monitor_config.json) and will read any log path provided by the user. This file access is proportional to the feature set, but the registry metadata could more accurately declare the dependency on nvidia-smi and the optional config/log path.
Persistence & Privilege
The skill does not request permanent/always-on privileges (always:false) and does not modify other skills or system-wide settings. A small surprising behavior: entry.py contains a helper that writes an entry.py file (self-overwrite/creation inside the skill directory). This is limited to the skill's directory and not evidence of privilege escalation, but users may want to be aware that the package can write files to its own installation folder.
Assessment
This skill appears to do exactly what it says: local GPU monitoring via nvidia-smi and optional Ollama server.log parsing. Before installing: 1) Ensure you have an NVIDIA GPU and nvidia-smi available (SKILL.md requires this, but registry metadata omitted it). 2) Be careful what log path you supply—pointing the tool at arbitrary system logs could expose sensitive information; the skill will read the specified log file. 3) The package includes Python files that run locally and will write an entry.py file into the skill directory; inspect the files if you don't trust the unknown source. 4) Because the skill has no network behavior, running it locally is lower risk—still run it in a user account with appropriate file permissions and review the code if you want higher assurance.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97c08ef9kpx801ptvtw4c9kj983qtby
85 downloads · 0 stars · 1 version · Updated 1mo ago
v1.0.0
MIT-0

GPU Monitor - Ollama Real-time GPU Monitoring Skill

Overview

This skill provides real-time GPU monitoring for local Ollama models. It monitors:

  • GPU name and memory usage with utilization percentage (e.g., 8.5/10.0 GB = 85%)
  • Model layer distribution (GPU vs CPU offloading) via Ollama server.log parsing
  • Live status updates every 2 seconds

⚠️ Framework Dependency: This skill is specifically designed for the Ollama framework (https://ollama.ai).
📝 Log Requirement: Requires access to Ollama's server.log file at a configurable path to parse model layer information.
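The layer information itself would come from load messages in that log. A hypothetical parser for llama.cpp-style lines of the form `offloaded N/M layers to GPU` (the exact log format is an assumption about Ollama's server.log, not taken from this skill's code) could look like:

```python
import re

# Assumed llama.cpp-style line, e.g. "llm_load_tensors: offloaded 32/33 layers to GPU"
LAYER_RE = re.compile(r"offloaded (\d+)/(\d+) layers to GPU")

def parse_layers(lines):
    """Scan log lines newest-first; return the latest layer split, or None."""
    for line in reversed(lines):
        m = LAYER_RE.search(line)
        if m:
            gpu, total = int(m.group(1)), int(m.group(2))
            cpu = total - gpu
            return {
                "gpu_layers": gpu,
                "total_layers": total,
                "cpu_layers": cpu,
                "cpu_pct": round(100 * cpu / total, 1) if total else 0.0,
            }
    return None
```

Scanning newest-first means the most recently loaded model wins if the log contains several load events.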

Features

Ollama-specific monitoring: Automatically parses server.log for model info when available
Layer distribution tracking: Shows GPU layers, total layers, and CPU offload percentage
Memory visualization: Displays memory used/total with real-time utilization %
Cross-platform: Works on Windows/Linux/macOS with NVIDIA GPUs via nvidia-smi
Real-time updates: Configurable refresh interval (default: 2 seconds)
Flexible configuration: Specify Ollama log path via CLI --ollama-log=PATH or config file
Graceful degradation: Shows GPU metrics even without Ollama installed
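Under the hood, features like these typically rest on a single `nvidia-smi` CSV query. The flags below are standard `nvidia-smi` options; the helper names are illustrative, not the skill's actual API:

```python
import subprocess

def parse_smi_line(line):
    """Parse one 'name, memory.used, memory.total' CSV row (MiB values)."""
    name, used, total = [s.strip() for s in line.split(",")]
    used_gb, total_gb = int(used) / 1024, int(total) / 1024
    return {
        "name": name,
        "used_gb": round(used_gb, 1),
        "total_gb": round(total_gb, 1),
        "util_pct": round(100 * used_gb / total_gb, 1),
    }

def query_gpu():
    """Return stats for the first GPU, or None if nvidia-smi is unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True, timeout=5,
        ).stdout
    except (OSError, subprocess.SubprocessError):
        return None  # no NVIDIA driver or nvidia-smi not on PATH
    return parse_smi_line(out.splitlines()[0])
```

Returning `None` instead of raising is what makes the graceful-degradation behavior above possible on machines without an NVIDIA GPU.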

Installation

# Via ClawHub
clawhub install gpu-monitor-skill

# Or manual clone
git clone <repository-url> ~/.openclaw/skills/gpu-monitor

Usage (Local Testing)

# Basic usage - monitors local GPU
python ~/.openclaw/clawhub/gpu-monitor-skill/gpu_monitor.py --interval=3

# With Ollama log path for layer tracking
python ~/.openclaw/clawhub/gpu-monitor-skill/gpu_monitor.py \
    --ollama-log="C:\Users\zugzwang\AppData\Local\Ollama\server.log" \
    --interval=2

# Using config file (create ~/.openclaw/gpu_monitor_config.json)
{
  "update_interval_seconds": 2,
  "ollama_log_path": "/path/to/server.log",
  "quiet_mode": false
}

Configuration

Create ~/.openclaw/gpu_monitor_config.json:

{
  "update_interval_seconds": 2,
  "ollama_log_path": "/path/to/Ollama/server.log",
  "quiet_mode": false
}
| Field | Type | Description |
| --- | --- | --- |
| `update_interval_seconds` | int | Refresh interval in seconds (default: 2) |
| `ollama_log_path` | string | Path to Ollama server.log (optional) |
| `quiet_mode` | bool | Disable banner messages |
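A sketch of how such a config might be merged over built-in defaults (illustrative; `load_config` and `DEFAULTS` are not the skill's actual names):

```python
import json
from pathlib import Path

# Defaults mirror the documented fields; unknown keys in the file pass through.
DEFAULTS = {
    "update_interval_seconds": 2,
    "ollama_log_path": None,
    "quiet_mode": False,
}

def load_config(path=Path.home() / ".openclaw" / "gpu_monitor_config.json"):
    """Merge the optional JSON config file over the defaults."""
    cfg = dict(DEFAULTS)
    try:
        cfg.update(json.loads(Path(path).read_text(encoding="utf-8")))
    except (OSError, json.JSONDecodeError):
        pass  # missing or malformed file: fall back to defaults
    return cfg
```

Falling back silently keeps the monitor usable with no config file at all, matching the "optional" status of every field.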

Output Examples

With Ollama Layer Info

┌─[Update #1] 12:30:45
├─ GPU:         NVIDIA GeForce RTX 3080
├─ Memory Used: 8.5/10.0 GB (85.0%)
├─ Log Time:    [real-time mode - no layer data]
├─ GPU Layers:  [real-time mode]

With Layer Data

┌─[Update #1] 12:31:02
├─ GPU:         NVIDIA GeForce RTX 3080
├─ Memory Used: 7.2/10.0 GB (72.0%)
├─ Log Time:    time=2026-03-27T12:31:02+08:00
├─ GPU Layers:  32 / 33
├─ CPU Layers:  1 (3.0%)

Without Ollama

┌─[Update #1] 12:32:15
├─ GPU:         NVIDIA GeForce RTX 3080
├─ Memory Used: 9.2/10.0 GB (92.0%)
├─ Log Time:    [real-time mode - no layer data]
├─ GPU Layers:  [real-time mode]

Prerequisites

  • Python 3.7+
  • NVIDIA GPU with nvidia-smi available (Windows/Linux/macOS)
  • (Optional) Ollama server for layer tracking

License

MIT License
