Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Daeva

v0.2.6

Use this skill whenever the user wants to interact with local or remote GPU pods for AI inference tasks. This includes transcribing audio (Whisper/speech-to-...

by Asmo(deus) LeBot (@asmolebot)
Security Scan
Capability signals
Crypto
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's described purpose (routing inference jobs to Daeva-managed GPU pods) matches the instructions in SKILL.md. However, the metadata claims no required binaries or env vars while the instructions explicitly rely on curl/HTTP access and reference DAEVA_URL/DAEVA_PORT and optionally Node.js and a 'daeva' binary. This mismatch is a coherence problem (expected binaries/env vars are not declared).
Instruction Scope
SKILL.md stays focused on the Daeva API (health checks, /jobs, proxy, lifecycle endpoints) which is appropriate. It also instructs the agent to read environment variables and to start the Daeva service if local (invoking 'daeva', 'node dist/src/cli.js', or 'systemctl --user start daeva'). Those lifecycle-start instructions expand the skill's runtime surface to executing system commands and managing services on the host — an expected capability for a pod orchestration skill but one that deserves explicit metadata and user consent.
Install Mechanism
This is an instruction-only skill with no install spec, which is low risk in itself. But SKILL.md references running 'daeva' and optionally Node.js; there is no install guidance or declared dependency for those binaries. The missing install/dependency declarations are an inconsistency (the skill may fail or require manual installation).
Credentials
The skill does not request secrets or multiple unrelated credentials. It references optional environment variables (DAEVA_URL, DAEVA_PORT) used solely to locate the Daeva service. There is no request for unrelated tokens or sensitive environment values.
Persistence & Privilege
The skill is not always-enabled and uses normal autonomous invocation defaults. It does instruct the agent to start/stop a local service, but it does not request persistent platform-level privileges or attempt to modify other skills' configs according to the provided materials.
What to consider before installing
This skill appears to implement what it claims (talking to a Daeva pod manager), but the SKILL.md expects local tools and actions that the registry metadata does not declare. Before installing:

  1. Confirm the agent environment actually has curl and, if you want MCP server control, Node.js and the 'daeva' binary.
  2. Be aware the skill's instructions include starting/stopping a local service (systemctl and running binaries), which requires appropriate permissions; don't grant those to untrusted agents.
  3. Verify you want the agent to be able to control shared infrastructure (Daeva is explicitly shared).
  4. Prefer installing this only in an isolated environment, or ensure the skill's metadata is corrected to list required binaries and optional env vars.

If the publisher can provide an install spec with explicit declared dependencies (curl, node, optional DAEVA_* env vars) and clarify service-control expectations, that would reduce the uncertainty.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97av9p154d2vrsjg1msma2q3584gd68
116 downloads
0 stars
4 versions
Updated 1w ago
v0.2.6
MIT-0

Daeva — GPU Pod Orchestrator

Daeva routes AI inference jobs (transcription, image generation, OCR, vision) to GPU-backed pods via a REST API and optional MCP server. It handles pod lifecycle, exclusivity groups (automatic GPU contention resolution), and portable pod packages. Daeva can run on the same machine as the agent or on a remote host — the default is localhost, but this is just a fallback.

Resolving the Daeva Base URL

Daeva can run locally or on a remote host. Resolve the base URL using these steps in order:

  1. Check environment variables. If DAEVA_URL is set, use it as the full base URL (e.g. http://server.local:8787). If only DAEVA_PORT is set, use http://127.0.0.1:$DAEVA_PORT.
  2. Try the default. If neither variable is set, use http://127.0.0.1:8787.
  3. Verify with a health check. Hit /health on the resolved URL. If it returns {"ok":true}, proceed.
  4. If the health check fails and no env vars were set, ask the user where Daeva is hosted before continuing. Do not guess or retry blindly.
# Resolve base URL from environment, falling back to localhost default
DAEVA_BASE="${DAEVA_URL:-http://127.0.0.1:${DAEVA_PORT:-8787}}"

# Verify the service is reachable
curl -sf "$DAEVA_BASE/health"
# Expected: {"ok":true}
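The full four-step resolution, including the "don't guess on failure" rule, can be wrapped in one helper. This is a sketch; the function name resolve_daeva_base is illustrative, not part of Daeva:

```shell
# Resolve the Daeva base URL per the steps above, failing fast when the
# health check does not pass so the caller can ask the user instead of
# guessing or retrying blindly.
resolve_daeva_base() {
  local base
  if [ -n "${DAEVA_URL:-}" ]; then
    base="$DAEVA_URL"                              # step 1: full URL wins
  else
    base="http://127.0.0.1:${DAEVA_PORT:-8787}"    # steps 1-2: port or default
  fi
  # step 3: health check; non-zero exit means "ask the user" (step 4)
  curl -sf "$base/health" >/dev/null || return 1
  printf '%s\n' "$base"
}
```

A caller would run `DAEVA_BASE="$(resolve_daeva_base)" || echo "ask user where Daeva is hosted"`.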

If the service is local and not running, start it:

# Foreground
daeva
# Or: PORT=8787 node dist/src/cli.js

# systemd
systemctl --user start daeva

All endpoints below use $DAEVA_BASE as the base URL. When constructing curl commands, MCP config, or downstream skill URLs, always substitute the resolved value — never hardcode 127.0.0.1 unless the agent is running on the same host as Daeva.

Important: Behavioral Rules

Daeva is a shared service. It is not per-user or per-session. Multiple agents and users may share the same Daeva instance. Treat it like shared infrastructure — don't make assumptions about what's running or why.

Use lifecycle endpoints for pod management. To wake, switch, or stop pods, use the dedicated lifecycle endpoints (/pods/:podId/activate, /pods/:podId/stop, /pods/swap). Never enqueue a dummy or throwaway job just to force a pod swap — that pollutes the job queue and may produce unwanted side effects on a shared service.

Route workload traffic through Daeva's proxy, not raw container ports. When Daeva is installed, downstream skills and clients (e.g. a ComfyUI skill, a Whisper client) should send requests through Daeva's proxy at $DAEVA_BASE/proxy/<podId> — not directly to the pod's container port. For example, if ComfyUI is managed by Daeva, the ComfyUI skill should hit $DAEVA_BASE/proxy/comfyapi instead of http://localhost:8188. This ensures Daeva can handle pod activation, exclusivity switching, and routing transparently. Only bypass the proxy if Daeva is confirmed to not be managing that pod.
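Under that convention, a downstream skill builds its target URL from $DAEVA_BASE and the pod ID rather than a container port. A minimal sketch (the proxy_url helper name is illustrative):

```shell
# Build a proxy URL for a Daeva-managed pod instead of hitting the raw
# container port directly.
proxy_url() {
  local base="$1" pod_id="$2" path="$3"
  printf '%s/proxy/%s%s' "$base" "$pod_id" "$path"
}

# e.g. proxy_url "$DAEVA_BASE" comfyapi /prompt
# -> http://127.0.0.1:8787/proxy/comfyapi/prompt (with the default base)
```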

Capabilities and Job Types

Capability         Job Type           Required Input
speech-to-text     transcribe-audio   filePath or url + contentType
image-generation   generate-image     prompt
ocr                extract-text       filePath or url
vision             describe-image     filePath or url

Built-in Pods

Pod ID       Capabilities               Description
comfyapi     image-generation, vision   ComfyUI/comfyapi backend
whisper      speech-to-text             Whisper transcription
ocr-vision   ocr, vision                OCR and visual analysis

Submitting Jobs

Post JSON to /jobs with type and files (or legacy input field):

# Transcribe audio
curl -s -X POST $DAEVA_BASE/jobs \
  -H 'Content-Type: application/json' \
  -d '{"type":"transcribe-audio","files":[{"source":"path","path":"/tmp/audio.wav"}]}'

# Generate an image
curl -s -X POST $DAEVA_BASE/jobs \
  -H 'Content-Type: application/json' \
  -d '{"type":"generate-image","capability":"image-generation","input":{"prompt":"a red fox on a snowy mountain"}}'

# OCR
curl -s -X POST $DAEVA_BASE/jobs \
  -H 'Content-Type: application/json' \
  -d '{"type":"extract-text","capability":"ocr","input":{"filePath":"/tmp/document.png"}}'

After submitting, poll for completion and retrieve the result:

curl -s $DAEVA_BASE/jobs/<job-id>          # Job state
curl -s $DAEVA_BASE/jobs/<job-id>/result   # Job result when complete
curl -s $DAEVA_BASE/jobs                   # List all jobs
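The poll step can be wrapped in a small loop. This is a sketch: the "status" field and its "completed"/"failed" values are assumptions about Daeva's job schema, not confirmed by this page.

```shell
# Poll a job until it finishes, then fetch the result.
# NOTE: the "status" field and the "completed"/"failed" values are assumed.
wait_for_job() {
  local job_id="$1" state
  while true; do
    state="$(curl -s "$DAEVA_BASE/jobs/$job_id")"
    case "$state" in
      *'"status":"completed"'*) break ;;
      *'"status":"failed"'*)    echo "job $job_id failed" >&2; return 1 ;;
    esac
    sleep 2
  done
  curl -s "$DAEVA_BASE/jobs/$job_id/result"
}
```

Checking the actual field names against a real /jobs/:id response before relying on this loop would be prudent.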

Pod Management

These endpoints control the full pod lifecycle — registering new pods, installing packages, and managing runtime state.

# List all registered pods and their runtime state
curl -s $DAEVA_BASE/pods

# Register a new pod from a manifest
curl -s -X POST $DAEVA_BASE/pods/register \
  -H 'Content-Type: application/json' \
  -d '{ ... pod manifest JSON ... }'

# Install a pod package by alias (e.g. "whisper")
curl -s -X POST $DAEVA_BASE/pods/create \
  -H 'Content-Type: application/json' \
  -d '{"alias":"whisper"}'

# List available aliases from the registry
curl -s $DAEVA_BASE/pods/aliases

# List already-installed packages
curl -s $DAEVA_BASE/pods/installed

# Activate (start) a specific pod
curl -s -X POST $DAEVA_BASE/pods/<podId>/activate

# Stop a specific pod
curl -s -X POST $DAEVA_BASE/pods/<podId>/stop

# Swap to a different pod (handles exclusivity group conflicts automatically)
curl -s -X POST $DAEVA_BASE/pods/swap \
  -H 'Content-Type: application/json' \
  -d '{"podId":"comfyapi"}'

Exclusivity groups: When two pods share the same GPU and can't run simultaneously, Daeva automatically stops the current pod and starts the target when you swap or submit a job that requires a different pod.

Pod Package Sources

Packages can be installed from multiple sources:

  • local-file — local directory containing a pod-package.json
  • github-repo — owner/repo with optional ref and subpath
  • git-repo — arbitrary Git URL
  • uploaded-archive — .tar.gz or .zip uploaded directly
  • registry-index — delegated lookup from a registry catalog

During install, Daeva runs package install hooks, creates declared host directories, and persists resolved host-path template variables (e.g. MODELS_DIR, INPUT_DIR).

Observability

Granular status endpoints for debugging and monitoring:

# Full combined status snapshot
curl -s $DAEVA_BASE/status

# Pod runtime state + container inspection
curl -s $DAEVA_BASE/status/runtime

# Installed packages + registry state
curl -s $DAEVA_BASE/status/packages

# Queue depth + exclusivity groups
curl -s $DAEVA_BASE/status/scheduler

# Recent job history
curl -s $DAEVA_BASE/status/jobs/recent

Use /status/runtime when a pod seems stuck — it includes container-level inspection. Use /status/scheduler to understand why a job is queued (often an exclusivity group conflict).

Complete API Reference

Core Endpoints

Method   Path                    Purpose
GET      /health                 Liveness check
GET      /pods                   List pods and runtime state
POST     /pods/register          Register a new pod manifest
POST     /pods/create            Install a pod package by alias
GET      /pods/aliases           List registry aliases
GET      /pods/installed         List installed packages
POST     /pods/:podId/activate   Start or activate a pod
POST     /pods/:podId/stop       Stop a pod
POST     /pods/swap              Swap to a target pod (server-side)
ALL      /proxy/:podId/*         Proxy requests to a pod's backend
POST     /jobs                   Submit an async job
GET      /jobs                   List jobs
GET      /jobs/:id               Get job state
GET      /jobs/:id/result        Get job result

Observability Endpoints

Method   Path                  Purpose
GET      /status               Combined status snapshot
GET      /status/runtime       Pod runtime + container inspection
GET      /status/packages      Installed packages + registry state
GET      /status/scheduler     Queue depth + exclusivity groups
GET      /status/jobs/recent   Recent job history

MCP Server Configuration

Daeva ships an MCP stdio server. The --base-url must point to the actual resolved Daeva URL — use $DAEVA_BASE, not a hardcoded localhost address (unless Daeva is genuinely local to the host running the MCP client).

{
  "mcpServers": {
    "daeva": {
      "command": "daeva-mcp",
      "args": ["--base-url", "http://server.local:8787"]
    }
  }
}

Replace http://server.local:8787 with the actual $DAEVA_BASE value for your environment. When the MCP server is configured, prefer using MCP tools over raw curl commands.
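One way to avoid a stale hardcoded host is to generate the entry from the resolved value at setup time. A sketch (the mcp_config helper name is illustrative, and where the output goes depends on your MCP client):

```shell
# Print an MCP server entry with the resolved Daeva base URL baked in,
# so daeva-mcp never points at a hardcoded localhost by accident.
mcp_config() {
  local base="${DAEVA_URL:-http://127.0.0.1:${DAEVA_PORT:-8787}}"
  cat <<EOF
{
  "mcpServers": {
    "daeva": {
      "command": "daeva-mcp",
      "args": ["--base-url", "$base"]
    }
  }
}
EOF
}

mcp_config   # redirect into your MCP client's config as appropriate
```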

Troubleshooting

  • Connection refused on /health — Service not running. Start with daeva or systemctl --user start daeva.
  • Job stays queued — No pod registered for that capability, or an exclusivity conflict is blocking it. Check /pods and /status/scheduler.
  • Pod won't start — Check /status/runtime for container-level errors.
  • 404 alias not found — The alias doesn't exist in the registry. Check /pods/aliases for valid options.
  • Package install fails — Verify the source (local path, git URL, archive) is accessible. Check /status/packages for install state.
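The first three checks above can be bundled into one triage pass. A sketch (the daeva_doctor name is illustrative; output is just the raw JSON from each endpoint):

```shell
# Basic triage sequence: health, then registered pods, then scheduler state.
daeva_doctor() {
  local base="${1:-${DAEVA_URL:-http://127.0.0.1:${DAEVA_PORT:-8787}}}"
  if ! curl -sf "$base/health" >/dev/null; then
    echo "daeva unreachable at $base" >&2
    return 1
  fi
  echo "health: ok"
  echo "--- pods ---";      curl -s "$base/pods"
  echo "--- scheduler ---"; curl -s "$base/status/scheduler"
}
```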
