Metrics Dashboard

v1.0.0

Track and visualize your agent's operational metrics. Record API calls, task completions, uptime, errors, and custom counters. Generate text-based dashboards...

by ArcSelf (@trypto1019)
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (metrics/dashboard) match the provided script and SKILL.md. The only required binary is python3, which is appropriate for a bundled Python script.
Instruction Scope
SKILL.md only instructs running the provided scripts to record, view, export, and display metrics. The runtime instructions and script operate on local files and do not request unrelated system files, environment variables, or external endpoints.
Install Mechanism
There is no install spec; the skill is instruction-only with a single Python script. That is proportionate for this functionality and minimizes installation risk.
Credentials
The skill requests no environment variables or credentials. The script only uses Path.home() and writes to ~/.openclaw/metrics, which is consistent with its stated purpose.
Persistence & Privilege
The skill is not always-enabled and does not modify other skills or system-wide settings. It persists data only in its own directory under the user's home.
Assessment
This skill appears internally consistent and runs entirely locally. Before installing:

  1. Metrics are stored unencrypted in ~/.openclaw/metrics/ (daily JSON files), so avoid recording sensitive secrets in metric fields or tags.
  2. Confirm that the {baseDir} the agent uses points to the packaged script you reviewed, to avoid running an unexpected replacement.
  3. If you run agents on shared or production hosts, consider tightening filesystem permissions or relocating the metrics directory.
  4. SKILL.md mentions "integration with compliance audit trail," but no explicit code for external integrations is present; if you need such integration, implement and verify it separately.
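The {baseDir} check above can be done by comparing checksums of the copy you reviewed against the copy the agent will actually run. A minimal sketch — the paths in the commented usage are placeholders, and the helper calls python3's hashlib so it behaves the same on macOS and Linux (avoiding the shasum/sha256sum split):

```shell
# Portable SHA-256 helper (python3 is already a required binary for this skill).
sha_of() {
  python3 -c "import hashlib, sys; print(hashlib.sha256(open(sys.argv[1], 'rb').read()).hexdigest())" "$1"
}

# Returns 0 only if both files exist and have identical contents.
verify_match() {
  [ -f "$1" ] && [ -f "$2" ] || { echo "missing file: $1 or $2" >&2; return 1; }
  [ "$(sha_of "$1")" = "$(sha_of "$2")" ]
}

# Hypothetical usage -- substitute the copy you reviewed and the installed script path:
#   verify_match ./reviewed/metrics.py "$HOME/.openclaw/skills/metrics-dashboard/scripts/metrics.py" \
#     && echo "OK: installed script matches the reviewed copy"
```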

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

📊 Clawdis
OS: macOS · Linux
Bins: python3
Latest: vk975nb9fstykbx4k1d7q5ecke9816mp2
789 downloads · 0 stars · 1 version
Updated 1mo ago
v1.0.0 · MIT-0 · macOS, Linux

Metrics Dashboard

Track your agent's operational health. Record events, count things, measure durations, and generate reports.

Why This Exists

Agents run 24/7 but have no way to answer basic questions: How many tasks did I complete? What's my error rate? How long do API calls take? Which skills do I use most? Without metrics, you're flying blind.

Commands

Record a metric

python3 {baseDir}/scripts/metrics.py record --name api_calls --value 1 --tags '{"provider": "openrouter", "model": "gpt-4"}'

Record a duration

python3 {baseDir}/scripts/metrics.py timer --name task_duration --seconds 12.5 --tags '{"task": "scan_skill"}'

Increment a counter

python3 {baseDir}/scripts/metrics.py counter --name posts_published --increment 1

Record an error

python3 {baseDir}/scripts/metrics.py error --name moltbook_verify_fail --message "Challenge solver returned wrong answer"

View dashboard

python3 {baseDir}/scripts/metrics.py dashboard

View metrics for today

python3 {baseDir}/scripts/metrics.py view --period day

View specific metric history

python3 {baseDir}/scripts/metrics.py view --name api_calls --period week

Export metrics

python3 {baseDir}/scripts/metrics.py export --format json > metrics.json
python3 {baseDir}/scripts/metrics.py export --format csv > metrics.csv
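From agent code it can be tidier to build these invocations programmatically than to format shell strings by hand. A sketch of a small wrapper: the subcommand and flag names mirror the commands above, but `metrics_argv`/`run_metric` themselves are hypothetical helpers, not part of the skill.

```python
import json
import subprocess
from pathlib import Path


def metrics_argv(base_dir, subcommand, **options):
    """Build the argv list for a metrics.py invocation.

    Keyword options map directly to the CLI flags shown above,
    e.g. name="api_calls" becomes --name api_calls. Dict values
    (tags) are JSON-encoded, matching the --tags examples.
    """
    argv = ["python3", str(Path(base_dir) / "scripts" / "metrics.py"), subcommand]
    for flag, value in options.items():
        if isinstance(value, dict):
            value = json.dumps(value)
        argv += [f"--{flag}", str(value)]
    return argv


def run_metric(base_dir, subcommand, **options):
    # check=True surfaces a non-zero exit as an exception instead of failing silently.
    subprocess.run(metrics_argv(base_dir, subcommand, **options), check=True)
```

For example, `metrics_argv(base_dir, "counter", name="posts_published", increment=1)` reproduces the counter command above as an argv list.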

Dashboard Output

The text-based dashboard shows:

  • Uptime since first metric recorded
  • Total events today
  • Top metrics by count
  • Error rate
  • Average durations for timed operations
  • Custom counter values
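The error-rate figure on the dashboard can be derived from recorded events. A sketch assuming a simple list-of-dicts record shape with a "type" key — a hypothetical shape for illustration, not the script's documented on-disk schema:

```python
def error_rate(events):
    """Fraction of recorded events whose type is 'error'.

    `events` is assumed to be a list of dicts with a 'type' key
    (an illustrative shape, not metrics.py's actual schema).
    Returns 0.0 for an empty list to avoid division by zero.
    """
    if not events:
        return 0.0
    errors = sum(1 for e in events if e.get("type") == "error")
    return errors / len(events)
```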

Metric Types

  • counter — Things you count (posts published, skills scanned, comments made)
  • timer — Things you measure in seconds (API response time, task duration)
  • event — Things that happened (errors, deployments, restarts)
  • gauge — Current values (karma, budget remaining, queue depth)
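Each type implies a different aggregation: counters sum, timers average, gauges keep their latest value, and events are simply counted. A sketch under the same hypothetical record shape ({"name", "type", "value"}), not the script's actual implementation:

```python
def aggregate(records):
    """Aggregate metric records by (name, type).

    Hypothetical record shape: {"name": ..., "type": ..., "value": ...}.
    counter -> sum, timer -> mean, gauge -> last value, event -> count.
    """
    grouped = {}
    for r in records:
        key = (r["name"], r["type"])
        grouped.setdefault(key, []).append(r.get("value", 1))
    result = {}
    for (name, kind), values in grouped.items():
        if kind == "counter":
            result[name] = sum(values)
        elif kind == "timer":
            result[name] = sum(values) / len(values)
        elif kind == "gauge":
            result[name] = values[-1]
        else:  # event: count occurrences
            result[name] = len(values)
    return result
```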

Storage

Metrics are stored in ~/.openclaw/metrics/ as daily JSON files. Lightweight, no database required.

Integration

Works with the compliance audit trail — log metrics events alongside audit entries for full operational visibility.
