Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

modly-image-to-3d

Desktop app that generates 3D models from images using local AI running entirely on your GPU

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 current installs · 0 all-time installs
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description claim a local Electron+Python app that runs models on GPU; SKILL.md describes cloning a repo, npm + pip setup, and model weight downloads — all expected for this purpose. No unrelated environment variables or cloud credentials are requested.
Instruction Scope
The runtime instructions direct the user/agent to install extensions from arbitrary GitHub repos and to run generator.py extension code and launcher scripts from the repository. Those extension scripts and launchers can execute arbitrary Python/JS code and access the filesystem. The doc neither mandates reviewing extension code nor requires cryptographic verification of downloaded weights (the manifest includes a sha256, but the install flow does not require or describe verification).
Install Mechanism
There is no registry install spec (instruction-only), which lowers repository-level risk, but the dev/quick-start flow runs git clone, npm install, pip install -r requirements, and model weight downloads from external URLs (e.g., huggingface). This is proportionate to the function but raises supply-chain risk: npm/pip dependencies and remote weight downloads can be vectors for malicious code if sources are untrusted or checksums aren't verified.
Credentials
The skill declares no required environment variables, credentials, or config paths. The described behavior (local model inference, local file I/O) does not require cloud secrets. However, the ability to install arbitrary extensions increases the chance an extension will request unrelated secrets or network access at runtime.
Persistence & Privilege
always:false and defaults for invocation are normal. Autonomous invocation is allowed (platform default). Combined with the ability to fetch and run arbitrary extension code, autonomous invocation could increase the blast radius if untrusted extensions are installed; the skill itself doesn't request persistent elevated privileges.
What to consider before installing
This skill appears to be what it says (a local Electron+Python app that downloads model weights and runs generator scripts), but it depends on running code from cloned repositories and downloading model weights that can contain or trigger arbitrary code. Before installing or running:

  1. Only use official/trusted repositories; inspect launcher scripts and any generator.py or install scripts before executing them.
  2. Prefer releases signed or published by the project's official GitHub account.
  3. Verify model weight checksums (sha256) and prefer models hosted on reputable providers.
  4. Run initial installs inside a sandbox or VM, and avoid running as administrator/root.
  5. Review npm/pip dependency lists for suspicious packages.
  6. If you need stronger assurance, request a packaged installer from a trusted source, or ask the developer for reproducible build instructions and signed artifacts.
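Point 3) above (checksum verification) is easy to do by hand before loading any downloaded weights. A minimal sketch, assuming you take the expected hash from the extension's manifest.json `sha256` field; the function name is illustrative, not part of Modly:

```python
import hashlib

def verify_sha256(file_path: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Stream the file through SHA-256 and compare against the manifest's value.

    Reading in chunks keeps memory flat even for multi-GB weight files.
    """
    digest = hashlib.sha256()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()
```

Refuse to load (and delete) any weight file for which this returns False.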

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
Download zip
latest · vk97bw7m15rbbmd59wwmckjt2p983af7q

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Modly Image-to-3D Skill

Skill by ara.so — Daily 2026 Skills collection.

Modly is a local, open-source desktop application (Windows/Linux) that converts photos into 3D mesh models using AI models running entirely on your GPU — no cloud, no API keys required.


Architecture Overview

modly/
├── src/                    # Electron + TypeScript frontend
│   ├── main/               # Electron main process
│   ├── renderer/           # React UI (renderer process)
│   └── preload/            # IPC bridge
├── api/                    # Python FastAPI backend
│   ├── generator.py        # Core generation logic
│   └── requirements.txt
├── resources/
│   └── icons/
├── launcher.bat            # Windows quick-start
├── launcher.sh             # Linux quick-start
└── package.json

The app runs as an Electron shell over a local Python FastAPI server. Extensions are GitHub repos with a manifest.json + generator.py that plug into the extension system.
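The plug-in mechanism described above can be pictured as a directory scan over api/extensions/. This is a hedged sketch of what discovery could look like, not Modly's actual code; the layout (`<extensions_dir>/<id>/manifest.json` + `generator.py`) follows the structure shown in this document:

```python
import json
from pathlib import Path

def discover_extensions(extensions_dir: str) -> list:
    """Scan <extensions_dir>/<id>/ folders for a manifest.json + generator.py pair."""
    found = []
    for ext_dir in sorted(Path(extensions_dir).iterdir()):
        if not ext_dir.is_dir():
            continue  # ignore stray files at the top level
        manifest = ext_dir / "manifest.json"
        generator = ext_dir / "generator.py"
        if manifest.is_file() and generator.is_file():
            meta = json.loads(manifest.read_text(encoding="utf-8"))
            meta["path"] = str(ext_dir)  # remember where the extension lives
            found.append(meta)
    return found
```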


Installation

Quick start (no build required)

# Windows
launcher.bat

# Linux
chmod +x launcher.sh
./launcher.sh

Development setup

# 1. Clone
git clone https://github.com/lightningpixel/modly
cd modly

# 2. Install JS dependencies
npm install

# 3. Set up Python backend
cd api
python -m venv .venv

# Activate (Windows)
.venv\Scripts\activate

# Activate (Linux/macOS)
source .venv/bin/activate

pip install -r requirements.txt
cd ..

# 4. Run dev mode (starts Electron + Python backend)
npm run dev

Production build

# Build installers for current platform
npm run build

# Output goes to dist/

Key npm Scripts

npm run dev        # Start app in development mode (hot reload)
npm run build      # Package app for distribution
npm run lint       # Run ESLint
npm run typecheck  # TypeScript type checking

Extension System

Extensions are GitHub repositories containing:

  • manifest.json — metadata and model variants
  • generator.py — generation logic implementing the Modly extension interface

manifest.json structure

{
  "name": "My 3D Extension",
  "id": "my-extension-id",
  "description": "Generates 3D models using XYZ model",
  "version": "1.0.0",
  "author": "Your Name",
  "repository": "https://github.com/yourname/my-modly-extension",
  "variants": [
    {
      "id": "model-small",
      "name": "Small (faster)",
      "description": "Lighter variant for faster generation",
      "size_gb": 4.2,
      "vram_gb": 6,
      "files": [
        {
          "url": "https://huggingface.co/yourorg/yourmodel/resolve/main/weights.safetensors",
          "filename": "weights.safetensors",
          "sha256": "abc123..."
        }
      ]
    }
  ]
}
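Before installing a third-party extension, it is worth sanity-checking its manifest against the structure above. A small sketch (the required-field sets are inferred from this document's example, and `validate_manifest` is a hypothetical helper, not part of Modly):

```python
import json

# Fields inferred from the example manifest above; adjust if the real schema differs.
REQUIRED_TOP = {"name", "id", "version", "variants"}
REQUIRED_FILE = {"url", "filename", "sha256"}

def validate_manifest(text: str) -> list:
    """Return a list of problems; an empty list means the manifest looks structurally valid."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    problems = [f"missing top-level field: {key}" for key in sorted(REQUIRED_TOP - data.keys())]
    for i, variant in enumerate(data.get("variants", [])):
        for j, file_entry in enumerate(variant.get("files", [])):
            for key in sorted(REQUIRED_FILE - file_entry.keys()):
                problems.append(f"variants[{i}].files[{j}] missing: {key}")
            if not file_entry.get("url", "").startswith("https://"):
                problems.append(f"variants[{i}].files[{j}] url is not HTTPS")
    return problems
```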

generator.py interface

# api/extensions/<extension-id>/generator.py
# Required interface every extension must implement

from pathlib import Path

def generate(
    image_path: str,
    output_path: str,
    variant_id: str,
    models_dir: str,
    **kwargs
) -> dict:
    """
    Required entry point for all Modly extensions.

    Args:
        image_path:  Path to input image file
        output_path: Path where output .glb/.obj should be saved
        variant_id:  Which model variant to use
        models_dir:  Directory where downloaded model weights live

    Returns:
        dict with keys:
            success (bool)
            output_file (str) — path to generated mesh
            error (str, optional)
    """
    try:
        # Load your model weights
        weights = Path(models_dir) / variant_id / "weights.safetensors"

        # Run your inference (run_inference is a placeholder for your model code)
        mesh = run_inference(str(weights), image_path)

        # Save output
        mesh.export(output_path)

        return {
            "success": True,
            "output_file": output_path
        }
    except Exception as e:
        return {
            "success": False,
            "error": str(e)
        }

Installing an extension (UI flow)

  1. Open Modly → go to Models page
  2. Click Install from GitHub
  3. Paste the HTTPS URL, e.g. https://github.com/lightningpixel/modly-hunyuan3d-mini-extension
  4. After install, click Download on the desired model variant
  5. Select the installed model and upload an image to generate

Official Extensions

  • modly-hunyuan3d-mini-extension — Hunyuan3D 2 Mini

Python Backend API (FastAPI)

The backend runs locally. Key endpoints used by the Electron frontend:

# Typical backend route patterns (api/main.py or similar)

# GET /extensions         — list installed extensions
# GET /extensions/{id}    — get extension details + variants
# POST /extensions/install — install extension from GitHub URL
# POST /generate          — trigger 3D generation
# GET /generate/status    — poll generation progress
# GET /models             — list downloaded model variants
# POST /models/download   — download a model variant

Calling the backend from Electron (IPC pattern)

// src/preload/index.ts — exposing backend calls to renderer
import { contextBridge, ipcRenderer } from 'electron'

contextBridge.exposeInMainWorld('modly', {
  generate: (imagePath: string, extensionId: string, variantId: string) =>
    ipcRenderer.invoke('generate', { imagePath, extensionId, variantId }),

  installExtension: (repoUrl: string) =>
    ipcRenderer.invoke('install-extension', { repoUrl }),

  listExtensions: () =>
    ipcRenderer.invoke('list-extensions'),
})
// src/main/ipc-handlers.ts — main process handling
import { ipcMain } from 'electron'

ipcMain.handle('generate', async (_event, { imagePath, extensionId, variantId }) => {
  const response = await fetch('http://localhost:PORT/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ image_path: imagePath, extension_id: extensionId, variant_id: variantId }),
  })
  return response.json()
})
// src/renderer/components/GenerateButton.tsx — UI usage
declare global {
  interface Window {
    modly: {
      generate: (imagePath: string, extensionId: string, variantId: string) => Promise<{ success: boolean; output_file?: string; error?: string }>
      installExtension: (repoUrl: string) => Promise<{ success: boolean }>
      listExtensions: () => Promise<Extension[]>
    }
  }
}

async function handleGenerate(imagePath: string) {
  const result = await window.modly.generate(
    imagePath,
    'modly-hunyuan3d-mini-extension',
    'hunyuan3d-mini-turbo'
  )

  if (result.success) {
    console.log('Mesh saved to:', result.output_file)
  } else {
    console.error('Generation failed:', result.error)
  }
}

Writing a Custom Extension

Minimal extension repository structure

my-modly-extension/
├── manifest.json
└── generator.py

Example: wrapping a HuggingFace diffusion model

# generator.py
import torch
from PIL import Image
from pathlib import Path

def generate(image_path, output_path, variant_id, models_dir, **kwargs):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    weights_dir = Path(models_dir) / variant_id

    try:
        # Load model (example pattern)
        from your_model_lib import ImageTo3DPipeline
        
        pipe = ImageTo3DPipeline.from_pretrained(
            str(weights_dir),
            torch_dtype=torch.float16
        ).to(device)

        image = Image.open(image_path).convert("RGB")
        
        with torch.no_grad():
            mesh = pipe(image).mesh

        mesh.export(output_path)

        return {"success": True, "output_file": output_path}

    except Exception as e:
        return {"success": False, "error": str(e)}

Configuration & Environment

Modly runs fully locally — no environment variables or API keys needed. GPU/CUDA is auto-detected by PyTorch in extensions.

Relevant configuration lives in:

package.json          # Electron app metadata, build targets
api/requirements.txt  # Python dependencies for backend

If you need to configure the backend port or extension directory, check the Electron main process config (typically src/main/index.ts) for constants like API_PORT or EXTENSIONS_DIR.


Common Patterns

Check if CUDA is available in an extension

import torch

def get_device():
    if torch.cuda.is_available():
        print(f"Using GPU: {torch.cuda.get_device_name(0)}")
        return "cuda"
    print("No GPU found, falling back to CPU (slow)")
    return "cpu"

Progress reporting from generator.py

import sys
import json

def report_progress(percent: int, message: str):
    """Write progress to stdout so Modly can display it."""
    print(json.dumps({"progress": percent, "message": message}), flush=True)

def generate(image_path, output_path, variant_id, models_dir, **kwargs):
    report_progress(0, "Loading model...")
    # ... load model ...
    report_progress(30, "Processing image...")
    # ... inference ...
    report_progress(90, "Exporting mesh...")
    # ... export ...
    report_progress(100, "Done")
    return {"success": True, "output_file": output_path}
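On the receiving side, whatever launches generator.py has to pick the progress records out of ordinary stdout noise. A hedged sketch of that parsing half, assuming the JSON-lines convention shown above (the real Modly backend may use a different mechanism):

```python
import json

def parse_progress_line(line: str):
    """Return (percent, message) if the line is a Modly progress record, else None."""
    line = line.strip()
    if not line.startswith("{"):
        return None  # ordinary log output, not a progress record
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        return None  # partial or malformed JSON; ignore it
    if "progress" in record and "message" in record:
        return record["progress"], record["message"]
    return None
```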

Adding a new page in the renderer (React)

// src/renderer/pages/MyPage.tsx
import React, { useEffect, useState } from 'react'

interface Extension {
  id: string
  name: string
  description: string
}

export default function MyPage() {
  const [extensions, setExtensions] = useState<Extension[]>([])

  useEffect(() => {
    window.modly.listExtensions().then(setExtensions)
  }, [])

  return (
    <div>
      <h1>Installed Extensions</h1>
      {extensions.map(ext => (
        <div key={ext.id}>
          <h2>{ext.name}</h2>
          <p>{ext.description}</p>
        </div>
      ))}
    </div>
  )
}

Troubleshooting

  • npm run dev — Python backend not starting → Ensure the venv exists and deps are installed: cd api && python -m venv .venv, activate it, then pip install -r requirements.txt
  • CUDA out of memory → Use a smaller model variant or close other GPU processes
  • Extension install fails → Verify the GitHub URL is HTTPS and the repo contains manifest.json at its root
  • Generation hangs → Check that your GPU drivers and CUDA toolkit match the PyTorch version in requirements.txt
  • App won't launch on Linux → Make launcher.sh executable: chmod +x launcher.sh
  • Model download stalls → Check disk space; large models (4–10 GB) need adequate free space
  • torch not found in extension → Ensure PyTorch is in api/requirements.txt, not just the extension's own deps

Verifying GPU is detected

cd api
source .venv/bin/activate   # or .venv\Scripts\activate on Windows
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no GPU')"
