Agent Metrics

v1.0.5

Monitor AI agent calls, errors, latency, and resource usage with a terminal dashboard and JSON export for observability and metrics tracking.

0 stars · 887 downloads · 4 current · 4 all-time
by Ivan Cetta (@nantes)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for nantes/agent-metrics-osiris.

Prompt Preview: Install & Setup
Install the skill "Agent Metrics" (nantes/agent-metrics-osiris) from ClawHub.
Skill page: https://clawhub.ai/nantes/agent-metrics-osiris
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install nantes/agent-metrics-osiris

ClawHub CLI

Package manager switcher

npx clawhub@latest install agent-metrics-osiris
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
metrics.py implements call/error/latency/resource tracking and a terminal dashboard, which aligns with the skill description. However, there are inconsistencies: the registry metadata lists no required binaries while SKILL.md metadata states it requires python and the psutil pip package; SKILL.md references a PowerShell wrapper (agent-metrics.ps1) that is not present in the file manifest; SKILL.md version (1.0.3) differs from the registry version (1.0.5). These mismatches don't imply malicious behavior but reduce trust in provenance.
Instruction Scope
Instructions stay within the expected scope (install psutil, run the CLI or the recommended PowerShell wrapper, record/view/export metrics). However, SKILL.md tells the agent to use a PowerShell wrapper that isn't included and to run pip install psutil even though there is no install spec. The actions are appropriate for a local metrics tool, but the missing wrapper file and the metadata drift are concerning.
Install Mechanism
There is no formal install spec in the registry (instruction-only), and SKILL.md simply instructs users to run pip install psutil. That is a low-risk, common approach, but because installation is ad-hoc (manual pip) users should install dependencies from trusted sources and consider using a virtualenv.
Credentials
The skill requires no credentials or environment variables. It only reads/writes a local JSON file (agent_metrics.json) and queries local system metrics (psutil, disk usage '/'). No network calls or secrets are requested. Note: recorded errors may include stack traces which can inadvertently capture sensitive local information.
Persistence & Privilege
The skill does not request always: true, does not modify other skills or system-wide configurations, and does not persist beyond creating/updating its own metrics file in the working directory. Privilege footprint is limited to local file I/O and standard psutil queries.
Assessment
This skill appears to be a simple, local metrics CLI and is coherent with its stated purpose, but there are a few things to check before installing or running it:

  • Provenance and version: SKILL.md and the registry metadata disagree on versions and declared requirements, and a PowerShell wrapper referenced in the docs is not included. Verify the source and prefer an official repository or release before trusting it.
  • Dependency install: SKILL.md asks you to run `pip install psutil`. Install dependencies in a virtualenv and from PyPI using an account you trust.
  • Local data and privacy: the tool writes agent_metrics.json (and any export files) to the current working directory. Error records can include stack traces that may leak sensitive information; inspect metrics files before sharing them externally.
  • File paths and permissions: by default it uses the current directory and disk usage('/'). If you need different locations or tighter permissions, modify the code or run it in an isolated environment (container or dedicated user) to limit exposure.
  • Missing wrapper: the docs mention agent-metrics.ps1, but the file isn't present. Use the included metrics.py directly or obtain the missing wrapper from a trusted source.

If you need higher assurance, ask the publisher for a canonical repo/release, verify checksums, or run the script in an isolated test environment before integrating it with production agents.

Like a lobster shell, security has layers — review code before you run it.

Tags: agent · latest · metrics · utility
887 downloads
0 stars
6 versions
Updated 2mo ago
v1.0.5
MIT-0

Agent Metrics Skill

Track and monitor your AI agent's behavior with built-in observability.

Files included:

  • metrics.py - Python CLI (cross-platform)
  • agent-metrics.ps1 - PowerShell wrapper (Windows)

What it does

  • Call Tracking - Count API calls, messages, tasks
  • Error Logging - Track errors with stack traces
  • Latency Metrics - Measure response times
  • Resource Usage - CPU, memory, network
  • Simple Dashboard - Terminal-based metrics view
  • Export - JSON export for external dashboards

Installation

# Install Python dependency
pip install psutil
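Before first use, you can confirm the dependency is importable. This is an illustrative pre-flight check, not part of the skill; the `have` helper is hypothetical.

```python
import importlib.util

def have(module: str) -> bool:
    """Return True if the named module can be imported."""
    return importlib.util.find_spec(module) is not None

if not have("psutil"):
    print("psutil is missing; run: pip install psutil")
```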

Usage

Option 1: PowerShell (recommended on Windows)

.\agent-metrics.ps1 -Action record -MetricType call -Label "api_openai"

Option 2: Python CLI (cross-platform)

python metrics.py record --type call --label "api_openai"

Record an Error

.\agent-metrics.ps1 -Action record -MetricType error -Label "api_error" -Details "Rate limit exceeded"

Record Latency

.\agent-metrics.ps1 -Action record -MetricType latency -Label "task_process" -Value 1500
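The latency value is expected in milliseconds. A minimal sketch of how an agent might time an operation and hand the result to the Python CLI; the `work()` function is a stand-in, and the `--value` flag name is assumed by analogy with the documented `--type`/`--label` flags:

```python
import time

def work() -> None:
    """Stand-in for the operation being measured."""
    time.sleep(0.05)

start = time.perf_counter()
work()
elapsed_ms = round((time.perf_counter() - start) * 1000)

# Mirrors: python metrics.py record --type latency --label "task_process" --value <ms>
cmd = ["python", "metrics.py", "record", "--type", "latency",
       "--label", "task_process", "--value", str(elapsed_ms)]
# subprocess.run(cmd, check=True)  # run once metrics.py is available locally
```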

View Dashboard

.\agent-metrics.ps1 -Action dashboard

View Resource Usage (CPU, Memory, Disk)

.\agent-metrics.ps1 -Action resources

Export Metrics

.\agent-metrics.ps1 -Action export -Format json -Output metrics.json
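Once exported, the JSON can feed external dashboards or quick one-off analysis. A hedged sketch: the record layout below (a list of typed entries) is an assumption for illustration, not the skill's documented schema, but the error-rate arithmetic matches what the dashboard reports (errors divided by calls):

```python
import json

# Hypothetical export payload; the real schema may differ.
payload = json.loads("""
{
  "metrics": [
    {"type": "call",  "label": "api_openai"},
    {"type": "call",  "label": "api_claude"},
    {"type": "error", "label": "api_error", "details": "Rate limit exceeded"},
    {"type": "call",  "label": "api_openai"}
  ]
}
""")

calls = sum(1 for m in payload["metrics"] if m["type"] == "call")
errors = sum(1 for m in payload["metrics"] if m["type"] == "error")
error_rate = 100 * errors / calls  # same ratio the dashboard shows
print(f"{errors}/{calls} -> {error_rate:.2f}%")
```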

Get Summary

.\agent-metrics.ps1 -Action summary

Metrics Types

| Type    | Description        | Fields                    |
|---------|--------------------|---------------------------|
| call    | API call made      | label, timestamp          |
| error   | Error occurred     | label, details, timestamp |
| latency | Response time (ms) | label, value, timestamp   |
| custom  | Custom metric      | label, value              |
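Per the table, each record carries a small fixed set of fields. A sketch of building conforming records; the dict layout and the ISO 8601 timestamp format are assumptions for illustration, not the skill's verified on-disk format:

```python
from datetime import datetime, timezone

def make_record(metric_type: str, label: str, **extra) -> dict:
    """Build a metric record with the fields listed above (assumed layout)."""
    record = {"type": metric_type, "label": label,
              "timestamp": datetime.now(timezone.utc).isoformat()}
    record.update(extra)  # e.g. details= for errors, value= for latency/custom
    return record

err = make_record("error", "api_error", details="Rate limit exceeded")
lat = make_record("latency", "task_process", value=1500)
```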

Dashboard Example

╔══════════════════════════════════════════════╗
║           AGENT METRICS DASHBOARD            ║
╠══════════════════════════════════════════════╣
║ Total Calls:    1,247                        ║
║ Total Errors:   23                           ║
║ Error Rate:     1.84%                        ║
║ Avg Latency:    234ms                        ║
║ Uptime:         4h 32m                       ║
╠══════════════════════════════════════════════╣
║ Top Labels:                                  ║
║   api_openai      892 (71.5%)                ║
║   api_claude      234 (18.8%)                ║
║   task_process    121 (9.7%)                 ║
╚══════════════════════════════════════════════╝

Requirements

  • Python 3.8+
  • psutil library

License

MIT
