Vultr Inference

v1.1.2

Generate images and text using the Vultr Inference API. Supports Flux image generation and various LLMs for text. Use when the user wants to generate images, artwork...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for happytreees/vultr-inference.

Prompt Preview: Install & Setup
Install the skill "Vultr Inference" (happytreees/vultr-inference) from ClawHub.
Skill page: https://clawhub.ai/happytreees/vultr-inference
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install vultr-inference

ClawHub CLI


npx clawhub@latest install vultr-inference
Security Scan
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description (Vultr Inference) align with the code and SKILL.md: the script and examples call Vultr inference endpoints and download generated images. The only credential it needs is the Vultr API key stored in ~/.config/vultr/api_key, which is coherent with the stated purpose.
Instruction Scope
Overall instructions stay within scope (call inference endpoints, download images, list models). Minor inconsistencies: SKILL.md examples show using an environment variable ($VULTR_API_KEY) while the setup and included Python/script both read the API key from ~/.config/vultr/api_key; model naming in examples (e.g., flux.1-schnell, stable-diffusion-3.5-medium) differs from some defaults/options in the script (which uses short names like 'flux' and 'flux-pro'). These could cause confusion or errors but are not evidence of malice.
Install Mechanism
Instruction-only with a single helper script; there is no install spec or remote download. Nothing is written to system locations by an installer. Low-risk from installation perspective.
Credentials
The skill only accesses a single local file (~/.config/vultr/api_key) for the Vultr API key which is proportionate to its purpose. Caveats: the README also shows use of an env var ($VULTR_API_KEY) that is not declared as required; the skill expects the key stored in plain text, so users should protect file permissions and ensure the key has minimal privileges.
Persistence & Privilege
Skill does not request always:true, does not modify other skills or global settings, and does not persist beyond reading the API key and writing downloaded images to the current directory. Normal privileges for this kind of helper.
Assessment
This skill appears to do what it says: call Vultr Inference endpoints and download results. Before installing, consider:

  • The API key is read from ~/.config/vultr/api_key in plain text; restrict that file's permissions (e.g., chmod 600) and ensure the key has only inference permissions.
  • SKILL.md examples also use $VULTR_API_KEY; decide which method you'll use and update the docs if needed.
  • The script's default/allowed model names differ from some examples, so verify the correct model IDs with your Vultr account if you see 400 errors.
  • Generated images are written to the current directory; run the tool in a safe folder if you worry about overwriting files.

If you want extra assurance, inspect the script locally (it is short and readable) before enabling autonomous invocation.

Like a lobster shell, security has layers — review code before you run it.

latest: vk971da5kwxvdff91yzgacfa585834bd5
177 downloads
0 stars
4 versions
Updated 1mo ago
v1.1.2
MIT-0

vultr-inference

Generate images and text using Vultr's Inference API.

Setup

Uses the same API key as Vultr Cloud API. Store it at:

~/.config/vultr/api_key
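The key is stored in plain text, so restricting its file permissions (e.g., chmod 600) is a good idea. A minimal sketch of loading it from Python, with a permissions check; the `load_api_key` helper name is illustrative and not part of the skill:

```python
import os
import stat

def load_api_key(path="~/.config/vultr/api_key"):
    """Read the Vultr API key, failing loudly if the file is missing."""
    full = os.path.expanduser(path)
    if not os.path.exists(full):
        raise FileNotFoundError(f"No API key found at {full}; see Setup above")
    # Warn if the key file is readable by group/others
    mode = os.stat(full).st_mode
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        print(f"warning: {full} is readable by other users; consider chmod 600")
    with open(full) as f:
        return f.read().strip()
```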

Image Generation

Available Models

Model                          Description
flux.1-dev                     FLUX.1-dev - High quality
flux.1-schnell                 FLUX.1-schnell - Fast generation
stable-diffusion-3.5-medium    SD 3.5 Medium - Balanced

Generate Image

curl -X POST "https://api.vultrinference.com/v1/images/generations" \
  -H "Authorization: Bearer $VULTR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "flux.1-schnell",
    "prompt": "a hedgehog eating a burger in Amsterdam",
    "n": 1,
    "size": "1024x1024"
  }'

Parameters

Parameter        Type     Description
model            string   flux.1-dev, flux.1-schnell, stable-diffusion-3.5-medium
prompt           string   Text description of image
n                int      Number of images (1-4)
size             string   256x256, 512x512, 1024x1024
response_format  string   url (default) or b64_json
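These parameters can be sanity-checked client-side before the request goes out, which catches the most common 400 errors early. A hedged sketch; `build_image_request` is an illustrative helper, not part of the API or the skill:

```python
# Valid sizes per the parameter table above
VALID_SIZES = {"256x256", "512x512", "1024x1024"}

def build_image_request(prompt, model="flux.1-schnell", n=1,
                        size="1024x1024", response_format="url"):
    """Validate parameters against the table above and return the JSON body."""
    if not prompt:
        raise ValueError("prompt must not be empty")
    if size not in VALID_SIZES:
        raise ValueError(f"size must be one of {sorted(VALID_SIZES)}")
    if not 1 <= n <= 4:
        raise ValueError("n must be between 1 and 4")
    return {"model": model, "prompt": prompt, "n": n,
            "size": size, "response_format": response_format}
```

The returned dict can be passed directly as the `json=` argument of a `requests.post` call.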

Response

{
  "created": 1734567890,
  "data": [
    {
      "url": "https://ewr.vultrobjects.com/vultrinference-images/tmp_xxx.png"
    }
  ]
}
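With response_format set to b64_json, each entry in `data` presumably carries a base64-encoded `b64_json` field instead of a `url` (this follows the OpenAI-compatible shape and is an assumption, not confirmed by the skill docs). Decoding it to a file is a one-liner:

```python
import base64

def save_b64_image(b64_data, path):
    """Decode a base64-encoded image payload and write it to disk."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_data))
```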

Text Generation (Chat Completions)

Available Models

  • llama-3.1-405b-instruct - Meta Llama 3.1 405B
  • llama-3.1-70b-instruct - Meta Llama 3.1 70B
  • llama-3.1-8b-instruct - Meta Llama 3.1 8B
  • mixtral-8x7b-32768 - Mixtral 8x7B
  • qwen-2-72b-instruct - Qwen 2 72B

Chat Completion

curl -X POST "https://api.vultrinference.com/v1/chat/completions" \
  -H "Authorization: Bearer $VULTR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.1-70b-instruct",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "max_tokens": 100
  }'

Parameters

Parameter    Type    Description
model        string  Model ID from list above
messages     array   Chat messages with role and content
max_tokens   int     Maximum tokens to generate
temperature  float   Randomness (0-2, default 1)
stream       bool    Stream response (default false)
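With stream set to true, the endpoint likely returns server-sent events in the OpenAI-compatible wire format (`data: {...}` lines ending with `data: [DONE]`); this is an assumption based on the endpoint's OpenAI-style paths, not something the skill docs confirm. A sketch of extracting the text delta from one such line:

```python
import json

def parse_sse_chunk(line):
    """Extract the text delta from one SSE line of a streamed chat response.

    Assumes the OpenAI-compatible format; returns None for non-content
    lines (blank keep-alives, the [DONE] sentinel, etc.).
    """
    line = line.strip()
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload == "[DONE]":
        return None
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta", {})
    return delta.get("content")
```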

Python Example

import os
import requests

# Read the API key from the location described in Setup
with open(os.path.expanduser("~/.config/vultr/api_key")) as f:
    API_KEY = f.read().strip()

# Generate image
response = requests.post(
    "https://api.vultrinference.com/v1/images/generations",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "flux.1-schnell",
        "prompt": "a hedgehog eating a burger",
        "size": "512x512",
        "n": 1
    }
)
response.raise_for_status()  # fail early on 401/400 instead of a KeyError below

result = response.json()
image_url = result["data"][0]["url"]
print(f"Image URL: {image_url}")

# Download image
img_response = requests.get(image_url)
img_response.raise_for_status()
with open("generated_image.png", "wb") as f:
    f.write(img_response.content)

List Available Models

curl -s "https://api.vultrinference.com/v1/models" \
  -H "Authorization: Bearer $VULTR_API_KEY" | jq
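The same listing can be done from Python. Assuming the OpenAI-style response shape (`{"data": [{"id": "..."}, ...]}`, which matches the endpoint's path but is not confirmed by the skill docs), a small helper for pulling out the model IDs:

```python
def model_ids(models_response):
    """Pull model IDs out of a parsed /v1/models response.

    Assumes the OpenAI-style shape: {"data": [{"id": "..."}, ...]}.
    """
    return [m["id"] for m in models_response.get("data", [])]

# Example usage (requires a valid key; API_KEY as in the Python example above):
# resp = requests.get("https://api.vultrinference.com/v1/models",
#                     headers={"Authorization": f"Bearer {API_KEY}"})
# print(model_ids(resp.json()))
```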

Troubleshooting

401 Unauthorized

  • Check API key is valid
  • Ensure key has inference permissions

400 Bad Request

  • Check model name is correct
  • Check size is valid (256x256, 512x512, 1024x1024)
  • Check prompt is not empty

Rate Limits

  • Default: 60 requests per minute
  • Contact support for higher limits
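If you exceed the 60 requests/minute limit, the API will likely respond with HTTP 429. A generic client-side mitigation (not part of the skill) is exponential backoff with jitter before retrying:

```python
import random

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff with full jitter for retrying after HTTP 429.

    `attempt` is 0-based; the delay ceiling doubles each retry up to
    `cap` seconds, and full jitter spreads clients out so they don't
    all retry in lockstep.
    """
    delay = min(cap, base * (2 ** attempt))
    return random.uniform(0, delay)
```

Call `time.sleep(backoff_delay(attempt))` in your retry loop whenever a response comes back with status 429.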
