AI DevOps Toolkit

Pass. Audited by ClawScan on May 1, 2026.

Overview

This appears to be a coherent local AI fleet observability skill. Users should note, however, that it asks them to install and run an external DevOps package, and that it stores telemetry logs locally.

Before installing, review the ollama-herd package and run it only in the local AI fleet environment you intend to monitor. Protect the ~/.fleet-manager telemetry files, keep the router/node endpoints on trusted networks, and stop the services when they are no longer needed.
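The file-protection step above can be sketched in a couple of shell commands, assuming a Linux host and the `~/.fleet-manager` path named in the skill text:

```shell
# Restrict the telemetry directory to the owning user only.
# mkdir -p is a no-op if the toolkit has already created the directory.
mkdir -p "$HOME/.fleet-manager"
chmod 700 "$HOME/.fleet-manager"
```

This prevents other local accounts from reading the telemetry files; it does not affect what the services themselves log.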

Findings (4)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Third-party package installation

What this means

Installing the package means that code from this external project executes on the user’s machine whenever the toolkit is run.

Why it was flagged

The skill directs users to install a third-party PyPI package as a prerequisite. This is expected for the stated toolkit, but the package version is not pinned in the instruction.

Skill content
pip install ollama-herd
Recommendation

Review the package source, pin a trusted version if possible, and install it in a controlled Python environment.
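A minimal sketch of that recommendation, assuming `python3` is on PATH. `X.Y.Z` is a placeholder, not a version from this report: substitute a release you have actually reviewed on PyPI.

```shell
# Create an isolated environment so the package cannot touch system Python.
python3 -m venv fleet-venv

# Pin the exact reviewed version (X.Y.Z is a placeholder) so installs
# are reproducible and auditable.
printf 'ollama-herd==X.Y.Z\n' > requirements.txt

# Then install from the pinned file:
#   fleet-venv/bin/pip install -r requirements.txt
```

Pinning in a requirements file, rather than on the command line, leaves an auditable record of exactly what was approved.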

Finding 2: Long-running router and node services

What this means

A router or node process may continue listening locally and collecting/reporting fleet telemetry after setup.

Why it was flagged

The documented workflow starts router and node-agent style processes. This is purpose-aligned for fleet monitoring, but users should understand that these services may keep running until stopped.

Skill content
herd              # start the DevOps router ...
herd-node         # start on each DevOps-monitored node
Recommendation

Run the services only on intended hosts, confirm how to stop them, and avoid exposing their ports beyond trusted local or internal networks.
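One way to check and stop the services, assuming a Linux host with `ss` available. The process names are inferred from the commands shown above, and the port is the router default quoted later in this report; verify both against the Ollama Herd documentation before relying on `pkill`.

```shell
# Is anything listening on the router's default port, and on which interface?
ss -tln | grep ':11435' || echo "no listener on 11435"

# Stop the services when they are no longer needed.
# (-x matches the exact process name; confirm the names first.)
pkill -x herd-node 2>/dev/null || true
pkill -x herd      2>/dev/null || true
```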

Finding 3: Persistent local telemetry storage

What this means

Request metadata, errors, tags, and usage information may remain on disk and could reveal details about local workloads.

Why it was flagged

The skill discloses persistent local storage of traces, latency history, usage stats, and structured logs. This fits the observability purpose, but it creates retained operational data.

Skill content
Everything in this DevOps observability layer is backed by SQLite at `~/.fleet-manager/latency.db` ... `herd.jsonl` ... daily rotation, 30-day retention
Recommendation

Check what fields are logged, protect the ~/.fleet-manager directory, and adjust retention or cleanup if the telemetry is sensitive.
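A sketch of that check, assuming a Linux host with the `sqlite3` CLI installed, and assuming `herd.jsonl` lives in the same `~/.fleet-manager` directory (the skill content above does not give its full path):

```shell
FLEET_DIR="$HOME/.fleet-manager"
if [ -d "$FLEET_DIR" ]; then
  chmod 700 "$FLEET_DIR"                             # owner-only access
  sqlite3 "$FLEET_DIR/latency.db" '.schema'          # which tables/columns are stored?
  head -n 5 "$FLEET_DIR"/*.jsonl 2>/dev/null || true # sample the structured logs
fi
```

Inspecting the schema and a few log lines shows concretely which request fields are retained, which is the basis for deciding whether the 30-day retention is acceptable.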

Finding 4: Router and node endpoint exposure

What this means

If the router or node-agent endpoints are exposed outside a trusted boundary, operational telemetry could be visible to unintended clients.

Why it was flagged

The skill relies on communication between a local router and node agents. This is expected for fleet observability, but the visible instructions do not describe authentication or network exposure controls.

Skill content
router running at `http://localhost:11435` with one or more node agents reporting in
Recommendation

Keep the service bound to localhost or trusted interfaces, use firewall controls, and verify any authentication options in the Ollama Herd documentation.
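A quick way to verify the binding, assuming a Linux host with `ss` and `curl`; the port is the default shown in the skill content above.

```shell
# The local address column should show 127.0.0.1:11435 (loopback only),
# not 0.0.0.0:11435 or [::]:11435 (all interfaces).
ss -tln | awk '$4 ~ /:11435$/ {print $4}'

# Quick reachability check from the local machine.
curl -s -o /dev/null http://localhost:11435/ && echo "router reachable" \
  || echo "router not running"
```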