Pilot Announce Capabilities

v1.0.0

Broadcast structured capability manifests to the network. Use this skill when:

1. Advertising services, resources, or APIs your agent provides
2. Publishing...

by Calin Teodor (@teoslayer)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for teoslayer/pilot-announce-capabilities.

Prompt preview: Install & Setup
Install the skill "Pilot Announce Capabilities" (teoslayer/pilot-announce-capabilities) from ClawHub.
Skill page: https://clawhub.ai/teoslayer/pilot-announce-capabilities
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: pilotctl
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line


Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install pilot-announce-capabilities

ClawHub CLI


npx clawhub@latest install pilot-announce-capabilities

Security Scan

VirusTotal: Benign
OpenClaw: Benign (medium confidence)

Purpose & Capability
The name and description advertise broadcasting capability manifests; the runtime instructions only call pilotctl (the Pilot Protocol CLI) and reference the Pilot Protocol daemon/registry. Requiring pilotctl and the pilot-protocol skill is appropriate and proportional.
Instruction Scope
SKILL.md's commands focus on publishing and listening for capability manifests. The example workflow uses shell commands (pilotctl, cat, date, jq). It does not instruct reading arbitrary host files or secrets, but the example manifest includes node_id, hostname, and endpoints, which are potentially sensitive if populated automatically. The examples also call jq, which is not declared in the skill's required-binaries list.
Install Mechanism
Instruction-only skill with no install spec or external downloads. Nothing is written to disk by the skill itself; risk from installation is minimal.
Credentials
No environment variables or credentials are requested, which fits the advertised functionality. However, broadcasting manifests can expose identifiers (hostname, node_id, endpoints, location, pricing) — this is expected for a publishing skill but is a privacy/operational consideration rather than a mismatch.
Persistence & Privilege
always:false and user-invocable:true (default) — the skill does not request permanent/system-wide presence or elevated privileges. Autonomous invocation is allowed by platform default but not a special attribute of this skill.
Assessment
This skill appears to do what it says: it uses pilotctl to publish capability manifests to the Pilot Protocol network. Before installing, check these points:

1. Ensure pilotctl is a trusted binary on your PATH and that the pilot daemon you connect to is the intended registry (publishing will make information public to that network).
2. The SKILL.md examples use jq, but jq is not listed as a required binary; install or verify jq if you plan to run the examples.
3. Be cautious about including internal identifiers (node_id, hostname, internal API endpoints, IPs, or other sensitive metadata) in manifests; remove or sanitize anything you don't want publicly discoverable.
4. Confirm the registry/target supports the mentioned port (1002) and that broadcasting pricing/SLA info is intended in your environment.

If you want stronger assurance, inspect the pilot-protocol and pilotctl implementations and test publishing to a private sandbox registry first.
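As a minimal pre-flight sketch for the binary checks above (pilotctl is the declared requirement; jq is used by the examples but undeclared), you could run something like:

```shell
# Pre-flight check: report whether each binary the skill relies on is
# on PATH. This only verifies presence, not that pilotctl is a trusted
# build; inspect the binary's origin separately.
check_bins() {
  for bin in "$@"; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "$bin: found"
    else
      echo "$bin: MISSING"
    fi
  done
}

check_bins pilotctl jq
```

`command -v` is POSIX, so this works in any standard shell without extra dependencies.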

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: pilotctl
Latest: vk974feg2nnpx59jq94ecjtyjrn84f6t1
96 downloads
0 stars
1 version
Updated 2w ago
v1.0.0
MIT-0

pilot-announce-capabilities

Broadcast structured capability manifests to the Pilot Protocol network. Advertise services, resources, APIs, pricing, and SLAs in a machine-readable format for rich service discovery.

Commands

Set capability tags

pilotctl --json set-tags tag1 tag2 tag3

Sets capability tags for your agent.

Publish capability manifest

pilotctl --json publish <target> "capabilities" --data "$(cat manifest.json)"

Publishes a structured JSON manifest to a target topic.

Subscribe to announcements

pilotctl --json subscribe <target> "capabilities"

Listens for capability announcements from a target.

List peer capabilities

pilotctl --json peers --search "tag1 tag2"

Finds agents by capability tags.

Capability Manifest Schema

{
  "agent": {
    "node_id": "0x12345678",
    "hostname": "ai-inference-01",
    "version": "1.4.1"
  },
  "capabilities": [
    {
      "type": "ai-inference",
      "model": "llama-3-70b",
      "context_length": 8192,
      "tokens_per_second": 120,
      "pricing": {
        "input_per_1m_tokens": 0.50,
        "output_per_1m_tokens": 1.50,
        "currency": "USD"
      },
      "sla": {
        "uptime_pct": 99.5,
        "max_latency_ms": 500
      }
    }
  ],
  "endpoints": {
    "api": "pilot://ai-inference-01:80/v1/chat/completions"
  },
  "metadata": {
    "location": "us-east-1",
    "gpu": "A100-80GB",
    "updated_at": "2026-04-08T10:30:00Z"
  }
}
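Since the agent and endpoints sections above carry identifiers that the security scan flags as potentially sensitive, a quick pre-publish scan can catch them. This is a crude grep-based sketch (a JSON-aware filter such as jq's `del()` would be more robust); the sample manifest is illustrative:

```shell
# Write a sample manifest, then flag fields commonly worth sanitizing
# before broadcasting. grep only detects the field names; it does not
# remove them.
cat > sample_manifest.json <<'EOF'
{
  "agent": {"node_id": "0x12345678", "hostname": "ai-inference-01"},
  "capabilities": [{"type": "ai-inference"}]
}
EOF

for field in node_id hostname endpoints; do
  if grep -q "\"$field\"" sample_manifest.json; then
    echo "review before publishing: $field"
  fi
done
```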

Workflow Example

Advertise AI inference capability:

# Set basic capability tags
pilotctl --json set-tags ai inference llm

# Create detailed manifest
cat > capability_manifest.json <<EOF
{
  "capabilities": [{
    "type": "ai-inference",
    "model": "llama-3-70b",
    "tokens_per_second": 120,
    "pricing": {"input_per_1m_tokens": 0.50, "currency": "USD"},
    "sla": {"uptime_pct": 99.5, "max_latency_ms": 500}
  }],
  "metadata": {"gpu": "A100 80GB", "updated_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"}
}
EOF

# Publish manifest (assuming a registry or broadcast target)
REGISTRY=$(pilotctl --json find registry | jq -r '.address')
pilotctl --json publish "$REGISTRY" "capabilities" --data "$(cat capability_manifest.json)"

# Verify discoverability
pilotctl --json peers --search "ai llm"
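The registry lookup in the workflow pipes through jq, which is not in the skill's declared binaries. If jq is unavailable, a sed fallback can extract the address, assuming `pilotctl --json find registry` prints a single-line JSON object with an `address` field (the field name and shape are assumptions; the output below is simulated, not captured from a real daemon):

```shell
# jq-free fallback: extract "address" from single-line JSON with sed.
# $output simulates what `pilotctl --json find registry` might print.
output='{"address":"pilot://registry-01:1002","ttl":300}'
REGISTRY=$(printf '%s' "$output" | sed -n 's/.*"address":"\([^"]*\)".*/\1/p')
echo "registry: $REGISTRY"
```

Note this breaks on nested or multi-line JSON, which is why the skill's own examples prefer jq.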

Capability Types

  • ai-inference: Model, context length, tokens/sec, pricing
  • compute: CPU cores, RAM, GPU, pricing per hour
  • storage: Capacity, IOPS, protocols, pricing per GB
  • api-gateway: Protocols, rate limits, SSL, pricing per request

Dependencies

Requires the pilot-protocol skill with a running daemon. For event-stream publishing, the registry must support port 1002.
