Gradient Inference

Pass. Audited by ClawScan on May 1, 2026.

Overview

This skill appears to do what it says: it sends prompts to DigitalOcean Gradient using your API key and can optionally cache prompts.

Before installing, be comfortable granting it access to your Gradient API key, and remember that prompts, system messages, and image prompts are sent to DigitalOcean's inference service. Avoid `--cache` for sensitive content.

Findings (3)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: API key access for billable inference

What this means

Anyone using the skill allows it to make Gradient inference requests with their API key.

Why it was flagged

The skill clearly requires a DigitalOcean Gradient API key and uses it for authenticated provider calls, which is expected for the integration but gives the skill access to billable inference usage.

Skill content
All requests need a **Model Access Key** in the `Authorization: Bearer` header.

```bash
export GRADIENT_API_KEY="your-model-access-key"
```
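The bearer-token requirement described above can be sketched in a few lines. This is an illustrative assumption, not code from the skill: the helper name `build_headers` is ours, while the `GRADIENT_API_KEY` variable and `Authorization: Bearer` scheme come from the skill content.

```python
import os

def build_headers(api_key=None):
    """Build request headers from a model access key (hypothetical helper).

    Reads GRADIENT_API_KEY from the environment if no key is passed,
    and places it in the Authorization: Bearer header the skill requires.
    """
    key = api_key or os.environ.get("GRADIENT_API_KEY", "")
    if not key:
        raise RuntimeError("GRADIENT_API_KEY is not set")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
```

Using a dedicated key here (rather than an account-wide credential) limits the blast radius if it leaks into logs or prompts.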
Recommendation

Use a dedicated or least-privileged Gradient model access key if available, monitor usage, and avoid sharing the key in prompts or logs.

Finding 2: Prompts sent to an external inference service

What this means

Prompt text, system messages, and image prompts may leave the local environment and be processed by DigitalOcean’s inference service.

Why it was flagged

Chat messages and prompts are sent to the external DigitalOcean Gradient inference endpoint. This is core to the skill’s purpose and is disclosed, but it is still a data boundary users should notice.

Skill content
```python
payload = {
    "model": model,
    "messages": messages,
    "temperature": temperature,
    "max_tokens": max_tokens,
}

resp = requests.post(CHAT_COMPLETIONS_URL, headers=headers, json=payload, timeout=60)
```
Recommendation

Do not send secrets, private documents, or regulated data unless that use is acceptable under your DigitalOcean account and data-handling requirements.
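One way to act on this recommendation is a naive pre-send screen for obvious secret-like strings. Everything below (function name, patterns, message shape) is an illustrative assumption, not part of the skill, and real secret detection needs far more care than two regexes:

```python
import re

# Hypothetical patterns for secret-like content; tune for your environment.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    re.compile(r"(?i)\b(api[_-]?key|secret|token)\b\s*[:=]\s*\S+"),
]

def looks_sensitive(messages):
    """Return True if any message content matches a secret-like pattern."""
    for msg in messages:
        text = msg.get("content", "")
        if any(p.search(text) for p in SECRET_PATTERNS):
            return True
    return False
```

A check like this could gate the `requests.post` call shown in the skill content, refusing to send payloads that trip a pattern.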

Finding 3: Provider-side prompt caching via `store`

What this means

Cached prompts may be retained or reused by the provider as part of the intended prompt-caching behavior.

Why it was flagged

When the user enables caching through the Responses API, the script sets `store` to true, which is described by the skill as prompt caching for reuse/cost savings.

Skill content
```python
if store:
    payload["store"] = True
```
Recommendation

Leave `--cache` off for sensitive prompts, confidential context, or one-off requests that should not be persisted for caching.
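The caching toggle described in this finding can be sketched as follows. The `--cache` flag name comes from the report; the payload assembly and helper name are our own illustrative assumptions:

```python
import argparse

def build_payload(model, messages, cache=False):
    """Assemble a request payload; set store=True only when caching is opted in."""
    payload = {"model": model, "messages": messages}
    if cache:
        payload["store"] = True  # provider may retain the prompt for reuse
    return payload

parser = argparse.ArgumentParser()
parser.add_argument("--cache", action="store_true",
                    help="enable provider-side prompt caching (sets store=true)")
```

Keeping `store` absent by default means sensitive or one-off prompts are never marked for retention unless the user explicitly asks.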