performance-monitor
Pass. Audited by ClawScan on May 10, 2026.
Overview
This instruction-only skill is internally consistent for its performance-monitoring purpose, but users should deliberately scope its system-wide telemetry collection, data retention, and integration behavior before use.
Before installing, decide which systems and metrics the agent may monitor, whether collectors or alert integrations can be changed, who can access dashboards, and how long telemetry should be retained. Treat any specific reported improvement numbers as valid only if the agent provides supporting measurements.
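The pre-install decisions above can be written down as an explicit policy the agent is checked against. A minimal sketch, assuming a simple dict-based policy; all field names and values here are illustrative, not part of the skill:

```python
# Hypothetical scope policy recorded by the user before installing the skill.
# Every field name below is an assumption for illustration.
SCOPE_POLICY = {
    "systems": ["web-frontend", "api-gateway"],   # systems the agent may monitor
    "metrics": ["latency", "error_rate", "cpu"],  # metric families allowed
    "may_change_collectors": False,               # agent may not alter collectors
    "may_change_alerts": False,                   # agent may not reroute alerts
    "dashboard_viewers": ["sre-team"],            # who can access dashboards
    "retention_days": 30,                         # telemetry retention limit
}

def is_in_scope(system: str, metric: str, policy: dict = SCOPE_POLICY) -> bool:
    """Return True only if both the system and the metric family are allowed."""
    return system in policy["systems"] and metric in policy["metrics"]
```

A gate like `is_in_scope` makes it easy to reject any collection request that falls outside what was agreed up front.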
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The agent may propose or perform monitoring setup changes that affect systems, dashboards, alerts, or integrations.
These are operational changes that are expected for a performance monitoring skill, but they can affect infrastructure or alerting behavior if applied without user review.
Implementation approach:
- Install collectors
- Configure aggregation
- Create dashboards
- Set up alerts
- Implement anomaly detection
- Build reports
- Enable integrations
Approve specific collectors, dashboards, alert routes, and automation behavior before allowing changes to production or shared systems.
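One way to enforce that review step is an explicit allowlist of approved changes, so anything the agent proposes outside it is held for human sign-off. A hedged sketch; the (kind, name) pairs are hypothetical examples, not items from the skill:

```python
# Illustrative allowlist of changes the user has already approved.
# All identifiers here are assumptions for the sake of the example.
APPROVED_CHANGES = {
    ("collector", "node-exporter"),
    ("dashboard", "service-latency"),
    ("alert_route", "oncall-email"),
}

def review_proposals(proposals):
    """Split proposed (kind, name) changes into approved and needs-review lists."""
    approved, needs_review = [], []
    for change in proposals:
        (approved if change in APPROVED_CHANGES else needs_review).append(change)
    return approved, needs_review
```

Only the `approved` list would be applied automatically; everything in `needs_review` waits for the user.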
Monitoring data could reveal sensitive operational details and may be reused in later analysis or dashboards.
The skill explicitly contemplates storing and retaining operational telemetry, including potentially sensitive business, security, and compliance metrics.
Excerpts from the skill: "Data retention 90 days maintained ... Time-series storage ... Retention policies ... Business metrics ... Security metrics ... Compliance metrics"
Limit monitored data sources, avoid collecting secrets or sensitive payloads, and set clear retention, access-control, and deletion policies.
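Two of those controls, scrubbing credential-like labels and enforcing a retention window, can be sketched directly. This is a minimal illustration under assumptions: samples are dicts with a timestamp, and the secret-key pattern and 30-day window are placeholders, not values from the skill:

```python
import re
from datetime import datetime, timedelta, timezone

# Assumption: label keys matching this pattern are treated as secrets.
SECRET_PATTERN = re.compile(r"(password|token|api[_-]?key)", re.IGNORECASE)
RETENTION = timedelta(days=30)  # placeholder retention window

def scrub_labels(labels: dict) -> dict:
    """Drop any label whose key looks like a credential before storing it."""
    return {k: v for k, v in labels.items() if not SECRET_PATTERN.search(k)}

def expired(sample_ts: datetime, now: datetime) -> bool:
    """True if a sample is past the retention window and should be deleted."""
    return now - sample_ts > RETENTION
```

Running `scrub_labels` at ingest and an `expired` sweep on a schedule keeps secrets out of storage and bounds how long telemetry persists.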
Metrics or incident details may be passed to other agents or workflows.
The skill anticipates sharing performance and incident information with other agents, which is purpose-aligned but should have clear boundaries.
Integration with other agents:
- Support agent-organizer with performance data
- Collaborate with error-coordinator on incidents
- Work with workflow-orchestrator on bottlenecks
Share only necessary telemetry with trusted agents and ensure agent identities, permissions, and data boundaries are clear.
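A simple way to keep those data boundaries clear is a per-agent field allowlist, so each peer receives only the fields it needs. A sketch under assumptions: the agent names come from the skill text, but the field sets are illustrative:

```python
# Per-agent allowlists. Agent names are from the skill's integration list;
# the permitted fields are assumptions for illustration.
TRUSTED_AGENTS = {
    "agent-organizer": {"service", "p99_latency_ms"},
    "error-coordinator": {"service", "incident_id", "error_rate"},
    "workflow-orchestrator": {"service", "bottleneck_stage"},
}

def share_with(agent: str, record: dict) -> dict:
    """Return only the fields this agent may see; unknown agents get nothing."""
    allowed = TRUSTED_AGENTS.get(agent, set())
    return {k: v for k, v in record.items() if k in allowed}
```

Defaulting unknown agents to an empty set makes the boundary fail closed.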
A user could be misled if the agent reports these example figures as real results.
The canned delivery message includes specific outcome numbers that should be treated as placeholders unless actually measured.
Delivery notification: "Performance monitoring implemented. Collecting 2847 metrics across 50 agents with <1s latency. Created 23 dashboards detecting 47 anomalies, reducing MTTR by 65%. Identified optimizations saving $12k/month in resource costs."
Require the agent to cite measured evidence for reported metric counts, anomaly counts, MTTR changes, and cost savings.
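That requirement can be automated as a check that every reported figure has a supporting measurement within tolerance. A hedged sketch; the claim keys and 5% tolerance are hypothetical, not defined by the skill:

```python
def evidence_backed(claims: dict, measurements: dict, tolerance: float = 0.05) -> dict:
    """Mark each claimed number True only if a measurement backs it within tolerance."""
    result = {}
    for key, claimed in claims.items():
        measured = measurements.get(key)
        if measured is None:
            result[key] = False  # no supporting measurement at all
        else:
            result[key] = abs(claimed - measured) <= tolerance * max(abs(measured), 1)
    return result
```

Figures like the MTTR reduction or cost savings in the canned message would be flagged False until a real measurement exists for them.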
