Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

modal-gpu

v0.1.0

Run Python code on cloud GPUs using Modal serverless platform. Use when you need A100/T4/A10G GPU access for training ML models. Covers Modal app setup, GPU...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for lnj22/mhc-layer-impl-modal-gpu.

Prompt Preview: Install & Setup
Install the skill "modal-gpu" (lnj22/mhc-layer-impl-modal-gpu) from ClawHub.
Skill page: https://clawhub.ai/lnj22/mhc-layer-impl-modal-gpu
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install mhc-layer-impl-modal-gpu

ClawHub CLI


npx clawhub@latest install mhc-layer-impl-modal-gpu
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The skill's name and documentation describe Modal GPU usage (pip install modal, modal token set, using Modal functions and volumes). That capability aligns with the stated purpose. However, the SKILL metadata declares no required credentials or primary credential even though the runtime instructions explicitly instruct the user to set a Modal token (and describe using HuggingFace tokens/secrets for private data). This mismatch between declared requirements and runtime instructions is a design omission.
Instruction Scope
SKILL.md stays on-topic: it outlines creating Modal apps, building images, downloading data inside remote functions, and returning results. It does instruct running `modal token set --token-id <id> --token-secret <secret>` and references HF_TOKEN usage and Modal secrets. The instructions do not ask to read unrelated files or exfiltrate data, but they rely on secrets/credentials that are not declared in the metadata.
Install Mechanism
This is instruction-only with no install spec and no code files. The only installation guidance is `pip install modal`, which is appropriate and low risk for this purpose.
Credentials
The runtime docs require setting a Modal token (and show patterns for using a HuggingFace token for private datasets), yet the skill declares no required environment variables or primary credential. That omission hides that tokens/credentials are necessary and materially affect trust. Requesting tokens for the platform and optional dataset access is reasonable for the skill's function, but the metadata should declare them. Users should be aware they will need to provide secrets and should limit permissions.
Persistence & Privilege
The `always` flag is false, and there is no install step that writes persistent configuration. The skill does mention creating Modal secrets and volumes, which are normal Modal features. There is no indication the skill requests elevated platform privileges beyond normal Modal usage.
What to consider before installing
This skill appears to be what it claims (how to run training on Modal GPUs), but the metadata doesn't list the credentials the instructions use. Before installing or running:

  1. Expect to provide a Modal token (and optionally a HuggingFace token for private datasets); only provide tokens you trust, and scope them with least privilege.
  2. Prefer creating Modal secrets (Modal's secret store) rather than pasting tokens into scripts.
  3. Verify the skill's origin and review any code you run on Modal (especially user-provided scripts) for data exfiltration or unexpected network calls.
  4. Test with non-sensitive, limited-permission tokens first and rotate tokens after testing.
  5. If you need absolute assurance, ask the publisher to update the skill metadata to declare required credentials (e.g., PRIMARY_ENV or requires.env) so the requirements are explicit.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97brsdv9jjhq7dymh1jtmadqd84tdb2
61 downloads · 0 stars · 1 version
Updated 1w ago
v0.1.0
MIT-0

Modal GPU Training

Overview

Modal is a serverless platform for running Python code on cloud GPUs. It provides:

  • Serverless GPUs: On-demand access to T4, A10G, A100 GPUs
  • Container Images: Define dependencies declaratively with pip
  • Remote Execution: Run functions on cloud infrastructure
  • Result Handling: Return Python objects from remote functions

Two patterns:

  • Single Function: Simple script with @app.function decorator
  • Multi-Function: Complex workflows with multiple remote calls
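
The multi-function pattern above can be sketched as two chained remote calls. This is an illustrative example, not code from the skill itself; the function names and the app name are made up, and running it requires a configured Modal token.

```python
import modal

app = modal.App("multi-step-demo")
image = modal.Image.debian_slim(python_version="3.11").pip_install("numpy")

# Hypothetical CPU preprocessing step
@app.function(image=image)
def preprocess(n: int) -> list:
    import numpy as np
    return (np.arange(n) / n).tolist()

# Hypothetical GPU step consuming the preprocessed data
@app.function(gpu="T4", image=image, timeout=600)
def train(data: list) -> dict:
    return {"num_samples": len(data)}

@app.local_entrypoint()
def main():
    data = preprocess.remote(1000)   # first remote call
    metrics = train.remote(data)     # second remote call, chained locally
    print(metrics)
```

Each `.remote()` call runs on Modal's infrastructure; the intermediate result is shuttled through the local entrypoint, which is the simplest way to chain steps.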

Quick Reference

Topic               Reference
Basic Structure     Getting Started
GPU Options         GPU Selection
Data Handling       Data Download
Results & Outputs   Results
Troubleshooting     Common Issues

Installation

pip install modal
modal token set --token-id <id> --token-secret <secret>

Minimal Example

import modal

app = modal.App("my-training-app")

image = modal.Image.debian_slim(python_version="3.11").pip_install(
    "torch",
    "einops",
    "numpy",
)

@app.function(gpu="A100", image=image, timeout=3600)
def train():
    import torch
    device = torch.device("cuda")
    print(f"Using GPU: {torch.cuda.get_device_name(0)}")

    # Training code here
    return {"loss": 0.5}

@app.local_entrypoint()
def main():
    results = train.remote()
    print(results)

Common Imports

import modal
from modal import Image, App

# Inside remote function
import torch
import torch.nn as nn
from huggingface_hub import hf_hub_download

When to Use What

Scenario                Approach
Quick GPU experiments   gpu="T4" (16 GB, cheapest)
Medium training jobs    gpu="A10G" (24 GB)
Large-scale training    gpu="A100" (40/80 GB, fastest)
Long-running jobs       Set timeout=3600 or higher
Data from HuggingFace   Download inside the function with hf_hub_download
Return metrics          Return a dict from the function
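
As a sketch of the "download inside the function" row, here is one way to fetch a file from a public HuggingFace repo within the remote container. The app name and `repo_id` are illustrative assumptions, not part of the skill.

```python
import modal

app = modal.App("hf-download-demo")
image = modal.Image.debian_slim(python_version="3.11").pip_install(
    "huggingface_hub"
)

@app.function(image=image, timeout=600)
def fetch_config() -> int:
    # Download happens inside the remote container, so the file never
    # needs to exist locally; repo_id here is just an example public repo.
    from huggingface_hub import hf_hub_download
    path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
    with open(path) as f:
        return len(f.read())
```

Downloading inside the function avoids uploading data from your machine and keeps the dataset close to the GPU.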

Running

# Run script
modal run train_modal.py

# Run in background
modal run --detach train_modal.py
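
The security scan notes that private HuggingFace data involves an HF_TOKEN and recommends Modal secrets over pasting tokens into scripts. A minimal sketch of that pattern, assuming you have created a Modal secret named "huggingface" (e.g., via `modal secret create huggingface HF_TOKEN=<token>`); the secret name and `repo_id` are assumptions for illustration:

```python
import modal

app = modal.App("private-data-demo")
image = modal.Image.debian_slim(python_version="3.11").pip_install(
    "huggingface_hub"
)

@app.function(
    image=image,
    # Injects HF_TOKEN as an environment variable from Modal's secret store
    secrets=[modal.Secret.from_name("huggingface")],
)
def fetch_private_file() -> str:
    import os
    from huggingface_hub import hf_hub_download
    token = os.environ["HF_TOKEN"]  # provided by the Modal secret, never hardcoded
    # repo_id is a placeholder; replace with your private dataset
    return hf_hub_download(
        repo_id="your-org/private-dataset",
        repo_type="dataset",
        filename="data.csv",
        token=token,
    )
```

Keeping the token in Modal's secret store means it never appears in your source tree, which matches the least-privilege advice in the scan above.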

External Resources
