Pilot Log Analytics Setup

v1.0.0

Deploy a log analytics system with 4 agents for collection, parsing, alerting, and visualization. Use this skill when: 1. User wants to set up centralized lo...

by Calin Teodor (@teoslayer)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for teoslayer/pilot-log-analytics-setup.

Prompt Preview: Install & Setup
Install the skill "Pilot Log Analytics Setup" (teoslayer/pilot-log-analytics-setup) from ClawHub.
Skill page: https://clawhub.ai/teoslayer/pilot-log-analytics-setup
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: pilotctl, clawhub
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install pilot-log-analytics-setup

ClawHub CLI

npx clawhub@latest install pilot-log-analytics-setup
Security Scan
Capability signals: Crypto. These labels describe what authority the skill may exercise; they are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description match the instructions: SKILL.md invokes pilotctl and clawhub to install and configure collector/parser/alerter/dashboard agents. The two required binaries (pilotctl, clawhub) are directly used by the provided commands and are reasonable for this setup.
Instruction Scope
Runtime instructions stay within the stated purpose (install skills, set hostname, write a manifest, perform handshakes, publish/subscribe sample events). They do instruct writing a setup manifest to ~/.pilot/setups/log-analytics.json and running pilotctl handshakes which establish trust between agents. Also note the skill installs other named skills (e.g., pilot-slack-bridge) — those downstream installs may introduce additional behaviors or credential prompts not covered here.
Install Mechanism
This is instruction-only (no install spec or code). The only installation step is 'clawhub install' for other skills; there are no direct downloads or archive extracts in this skill itself. Risk depends on clawhub and the packages it fetches, but the skill's own install surface is minimal.
Credentials
The skill declares no environment variables or credentials, which is proportional. However, some installed child skills (for example pilot-slack-bridge or webhook bridges) commonly require external tokens/credentials to operate — those are not requested here but may be required later when configuring those bridges.
Persistence & Privilege
always:false and user-invocable:true. The skill writes a manifest under the user's home (~/.pilot) which is expected for a local agent setup. It does not request persistent elevated privileges or modify other skills' configurations beyond creating its own manifest.
Assessment
This skill is coherent with its purpose, but review a few points before installing:

  1. Ensure the pilotctl and clawhub binaries on your system come from trusted sources and are up to date, since the skill relies on them.
  2. Inspect the child skills that will be installed (pilot-stream-data, pilot-slack-bridge, etc.); verify their origin and any credential requirements before providing tokens.
  3. The setup writes manifests to ~/.pilot and uses pilotctl handshakes that establish trust between agents; only handshake with hosts you control.
  4. Consider testing in an isolated environment or VM first to confirm the behavior of clawhub-installed components and any network interactions (especially bridges to Slack or external dashboards).

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: pilotctl, clawhub
latest: vk972sc9h8zg98sc4n4datjmqn985cexa
80 downloads
0 stars
1 version
Updated 5d ago
v1.0.0
MIT-0

Log Analytics Setup

Deploy 4 agents: collector, parser, alerter, and dashboard.

Roles

Role      | Hostname           | Skills                                                   | Purpose
----------|--------------------|----------------------------------------------------------|--------
collector | <prefix>-collector | pilot-stream-data, pilot-archive, pilot-compress         | Aggregates logs from servers, containers, apps; normalizes formats
parser    | <prefix>-parser    | pilot-event-filter, pilot-task-router, pilot-dataset     | Extracts structured fields, parses stack traces, identifies patterns
alerter   | <prefix>-alerter   | pilot-alert, pilot-metrics, pilot-cron                   | Detects log spikes, error rate anomalies, fires alerts
dashboard | <prefix>-dashboard | pilot-webhook-bridge, pilot-slack-bridge, pilot-announce | Search, visualization, drill-down, and report generation
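The role-to-skills mapping above can be captured in a small helper that emits the matching `clawhub install` command for a chosen role. This is a sketch, not part of the skill: the skill names come from the table, but the helper itself is hypothetical.

```python
# Hypothetical helper: map each role to its skill set (names from the
# table above) and build the corresponding `clawhub install` command.
ROLE_SKILLS = {
    "collector": ["pilot-stream-data", "pilot-archive", "pilot-compress"],
    "parser": ["pilot-event-filter", "pilot-task-router", "pilot-dataset"],
    "alerter": ["pilot-alert", "pilot-metrics", "pilot-cron"],
    "dashboard": ["pilot-webhook-bridge", "pilot-slack-bridge", "pilot-announce"],
}

def install_command(role: str) -> str:
    """Return the clawhub install invocation for the given role."""
    try:
        skills = ROLE_SKILLS[role]
    except KeyError:
        raise ValueError(f"unknown role: {role!r}") from None
    return "clawhub install " + " ".join(skills)

print(install_command("collector"))
# → clawhub install pilot-stream-data pilot-archive pilot-compress
```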

Setup Procedure

Step 1: Ask the user which role this agent should play and what prefix to use.

Step 2: Install the skills for the chosen role:

# For collector:
clawhub install pilot-stream-data pilot-archive pilot-compress
# For parser:
clawhub install pilot-event-filter pilot-task-router pilot-dataset
# For alerter:
clawhub install pilot-alert pilot-metrics pilot-cron
# For dashboard:
clawhub install pilot-webhook-bridge pilot-slack-bridge pilot-announce

Step 3: Set the hostname:

pilotctl --json set-hostname <prefix>-<role>

Step 4: Write the setup manifest:

mkdir -p ~/.pilot/setups
cat > ~/.pilot/setups/log-analytics.json << 'MANIFEST'
<INSERT ROLE MANIFEST FROM BELOW>
MANIFEST
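Steps 3 and 4 can also be scripted. The snippet below is a minimal sketch (the helper name and approach are assumptions, not part of the skill): it substitutes the user's prefix into a role template and writes the result to the path Step 4 uses.

```python
import json
from pathlib import Path

# Sketch: fill the <prefix> placeholders in a manifest template and write
# it where Step 4 expects it (~/.pilot/setups/log-analytics.json).
def write_manifest(template: dict, prefix: str, setups_dir: Path) -> Path:
    rendered = json.loads(json.dumps(template).replace("<prefix>", prefix))
    setups_dir.mkdir(parents=True, exist_ok=True)
    path = setups_dir / "log-analytics.json"
    path.write_text(json.dumps(rendered, indent=2))
    return path
```

Usage would look like `write_manifest(collector_template, "acme", Path.home() / ".pilot" / "setups")`, after which `<prefix>-collector` becomes `acme-collector` throughout the file.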

Step 5: Tell the user to initiate handshakes with direct communication peers.

Manifest Templates Per Role

collector

{
  "setup": "log-analytics", "setup_name": "Log Analytics",
  "role": "collector", "role_name": "Log Collector",
  "hostname": "<prefix>-collector",
  "description": "Aggregates logs from servers, containers, and applications. Normalizes formats.",
  "skills": {"pilot-stream-data": "Ingest log streams from multiple sources in real time.", "pilot-archive": "Archive raw logs for retention and forensic analysis.", "pilot-compress": "Compress high-volume log batches before transmission."},
  "peers": [{"role": "parser", "hostname": "<prefix>-parser", "description": "Receives raw normalized logs"}],
  "data_flows": [{"direction": "send", "peer": "<prefix>-parser", "port": 1002, "topic": "raw-log", "description": "Raw normalized logs from all sources"}],
  "handshakes_needed": ["<prefix>-parser"]
}

parser

{
  "setup": "log-analytics", "setup_name": "Log Analytics",
  "role": "parser", "role_name": "Log Parser",
  "hostname": "<prefix>-parser",
  "description": "Extracts structured fields, parses stack traces, identifies error patterns.",
  "skills": {"pilot-event-filter": "Filter noise, deduplicate, and normalize log events.", "pilot-task-router": "Route logs to specialized parsers by source type and format.", "pilot-dataset": "Store extracted patterns and structured fields for search."},
  "peers": [{"role": "collector", "hostname": "<prefix>-collector", "description": "Sends raw logs"}, {"role": "alerter", "hostname": "<prefix>-alerter", "description": "Receives parsed events"}],
  "data_flows": [{"direction": "receive", "peer": "<prefix>-collector", "port": 1002, "topic": "raw-log", "description": "Raw normalized logs from all sources"}, {"direction": "send", "peer": "<prefix>-alerter", "port": 1002, "topic": "parsed-event", "description": "Parsed events with structured fields and severity"}],
  "handshakes_needed": ["<prefix>-collector", "<prefix>-alerter"]
}

alerter

{
  "setup": "log-analytics", "setup_name": "Log Analytics",
  "role": "alerter", "role_name": "Anomaly Alerter",
  "hostname": "<prefix>-alerter",
  "description": "Detects log spikes, error rate anomalies, and novel error patterns. Fires alerts.",
  "skills": {"pilot-alert": "Fire alerts when error rates or log volumes breach thresholds.", "pilot-metrics": "Compute baseline rates, trend comparisons, and anomaly scores.", "pilot-cron": "Run scheduled anomaly scans over rolling time windows."},
  "peers": [{"role": "parser", "hostname": "<prefix>-parser", "description": "Sends parsed events"}, {"role": "dashboard", "hostname": "<prefix>-dashboard", "description": "Receives anomaly alerts"}],
  "data_flows": [{"direction": "receive", "peer": "<prefix>-parser", "port": 1002, "topic": "parsed-event", "description": "Parsed events with structured fields"}, {"direction": "send", "peer": "<prefix>-dashboard", "port": 1002, "topic": "anomaly-alert", "description": "Anomaly alerts with context and baseline comparisons"}],
  "handshakes_needed": ["<prefix>-parser", "<prefix>-dashboard"]
}

dashboard

{
  "setup": "log-analytics", "setup_name": "Log Analytics",
  "role": "dashboard", "role_name": "Log Dashboard",
  "hostname": "<prefix>-dashboard",
  "description": "Provides search, visualization, and drill-down into log data. Generates reports.",
  "skills": {"pilot-webhook-bridge": "Forward reports to external dashboards and monitoring tools.", "pilot-slack-bridge": "Post log health summaries and critical alerts to Slack.", "pilot-announce": "Broadcast periodic log health reports to subscribers."},
  "peers": [{"role": "alerter", "hostname": "<prefix>-alerter", "description": "Sends anomaly alerts"}],
  "data_flows": [{"direction": "receive", "peer": "<prefix>-alerter", "port": 1002, "topic": "anomaly-alert", "description": "Anomaly alerts with context and baselines"}, {"direction": "send", "peer": "external", "port": 443, "topic": "log-report", "description": "Log reports to dashboards and Slack channels"}],
  "handshakes_needed": ["<prefix>-alerter"]
}
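Before running Step 5, it can be worth sanity-checking a rendered manifest: every hostname in handshakes_needed should appear among its peers. A minimal sketch of that check (the check is an assumption for validation purposes, not part of the skill):

```python
# Sketch: flag handshake targets that have no matching peer entry
# in the manifest, which would indicate a mis-rendered template.
def check_handshakes(manifest: dict) -> list:
    """Return handshake targets missing from the manifest's peers."""
    peer_hosts = {p["hostname"] for p in manifest.get("peers", [])}
    return [h for h in manifest.get("handshakes_needed", []) if h not in peer_hosts]
```

An empty return value means the manifest's handshake list is consistent with its peers.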

Data Flows

  • collector -> parser : raw-log events from all sources (port 1002)
  • parser -> alerter : parsed-event with structured fields and severity (port 1002)
  • alerter -> dashboard : anomaly-alert with context and baselines (port 1002)
  • dashboard -> external : log-report via webhooks and Slack (port 443)
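The flow list above implies a pairing invariant: every internal send flow declared in one manifest should have a matching receive (same peer pair, port, and topic) in another. A sketch of that cross-manifest check, assuming the manifests are loaded as dicts (the helper itself is hypothetical):

```python
# Sketch: find send flows with no matching receive in any other manifest.
# External flows (peer == "external") are excluded from the pairing check.
def unmatched_sends(manifests: list) -> list:
    receives = set()
    for m in manifests:
        for f in m.get("data_flows", []):
            if f["direction"] == "receive":
                # Key: (sender, receiver, port, topic)
                receives.add((f["peer"], m["hostname"], f["port"], f["topic"]))
    missing = []
    for m in manifests:
        for f in m.get("data_flows", []):
            if f["direction"] == "send" and f["peer"] != "external":
                key = (m["hostname"], f["peer"], f["port"], f["topic"])
                if key not in receives:
                    missing.append(key)
    return missing
```

Run against all four rendered manifests, an empty result confirms the collector -> parser -> alerter -> dashboard chain is wired consistently.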

Handshakes

# collector <-> parser:
pilotctl --json handshake <prefix>-parser "setup: log-analytics"      # run on collector
pilotctl --json handshake <prefix>-collector "setup: log-analytics"   # run on parser
# parser <-> alerter:
pilotctl --json handshake <prefix>-alerter "setup: log-analytics"     # run on parser
pilotctl --json handshake <prefix>-parser "setup: log-analytics"      # run on alerter
# alerter <-> dashboard:
pilotctl --json handshake <prefix>-dashboard "setup: log-analytics"   # run on alerter
pilotctl --json handshake <prefix>-alerter "setup: log-analytics"     # run on dashboard

Workflow Example

# On collector -- publish raw log:
pilotctl --json publish <prefix>-parser raw-log '{"source":"nginx-prod-01","level":"error","message":"upstream timed out"}'
# On parser -- publish parsed event:
pilotctl --json publish <prefix>-alerter parsed-event '{"pattern_id":"NGINX-TIMEOUT-001","level":"error","occurrences_1h":47}'
# On alerter -- publish anomaly alert:
pilotctl --json publish <prefix>-dashboard anomaly-alert '{"alert_id":"ALR-7829","type":"error_spike","severity":"critical","current_rate":47,"baseline_rate":3}'
# On dashboard -- publish log report:
pilotctl --json publish <prefix>-dashboard log-report '{"period":"2026-04-09T15:00Z/PT1H","errors":1290,"anomalies_detected":2}'

Dependencies

Requires pilot-protocol skill, pilotctl binary, clawhub binary, and a running daemon.
