Logfile Analyzer

v1.0.0

Analyze application logs to produce actionable error digests with pattern detection, severity classification, trend analysis, and remediation recommendations...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for charlie-morrison/logfile-analyzer.

Prompt preview — Install & Setup
Install the skill "Logfile Analyzer" (charlie-morrison/logfile-analyzer) from ClawHub.
Skill page: https://clawhub.ai/charlie-morrison/logfile-analyzer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install logfile-analyzer

ClawHub CLI


npx clawhub@latest install logfile-analyzer
Security Scan
Capability signals
Crypto · Requires OAuth token
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description match the included Python analyzer (scripts/analyze_logs.py). The script and SKILL.md focus on parsing log files, grouping errors, trend detection, and recommendations — all coherent with a logfile analyzer. No unrelated binaries, environment variables, or cloud credentials are requested.
Instruction Scope
Instructions direct the agent/user to run the included script against local log files or directories (examples use /var/log). This is expected for the stated purpose. Note: the tool will read any file paths you provide (logs can contain sensitive data), so feeding it arbitrary system paths grants it access to those files.
Install Mechanism
No install spec — instruction-only with an included pure-Python (stdlib) script. No downloads, no external package registry dependencies, and no archives to extract.
Credentials
The skill declares no required environment variables or credentials and the included script imports only stdlib modules. There are no obvious requests for unrelated secrets or external service tokens.
Persistence & Privilege
always:false and no install behavior that modifies other skills or system-wide settings. The skill does not request permanent presence or privileged agent-wide configuration.
Assessment
This appears to be a straightforward local log analyzer, but consider the following before installing or running it:

  1. Logs often contain sensitive data (API keys, PII, auth tokens) — run the tool only on files you intend to analyze, preferably on a copy or in a sandbox.
  2. Review the included script (scripts/analyze_logs.py) yourself — it runs locally and will read whatever paths you pass to it.
  3. The skill has no provenance information (no homepage) and is packaged by an unknown owner — if this matters for your environment, prefer tools from known sources or audit the code.
  4. For automated runs (cron/CI), give the process least privilege (read-only access to specific log directories) and avoid sending raw outputs to external endpoints unless you control them.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97fv8n1jf34a1wfk0vnfqqhfd84ttf4
84 downloads
0 stars
1 version
Updated 2w ago
v1.0.0
MIT-0

Log Analyzer

Parse application logs into actionable error digests with pattern grouping, severity classification, trend detection, and remediation recommendations.

Quick Start

# Analyze a single log file
python3 scripts/analyze_logs.py /var/log/app.log

# Analyze all logs in a directory
python3 scripts/analyze_logs.py /var/log/myapp/

# Last 24 hours only, errors and above
python3 scripts/analyze_logs.py /var/log/app.log --since 24h --severity error

# JSON output for programmatic use
python3 scripts/analyze_logs.py /var/log/app.log --output json

# Markdown report with trends
python3 scripts/analyze_logs.py /var/log/app.log --output markdown --trends

# Ignore noisy patterns
python3 scripts/analyze_logs.py /var/log/app.log --ignore "healthcheck" --ignore "GET /favicon"

Supported Formats (Auto-Detected)

  • JSON structured — Bunyan, Winston, Pino, structlog, any {"level": ..., "msg": ...} format
  • Syslog — RFC 3164 (Mar 28 02:31:00 host service: msg)
  • Apache/Nginx access — Combined log format
  • Nginx error — 2026/03/28 02:31:00 [error] ...
  • Python tracebacks — Multi-line traceback collection
  • Docker — ISO 8601 timestamps with container output
  • Generic timestamped — [2026-03-28 02:31:00] LEVEL: message

Force format with --format <name> if auto-detection fails.
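As a rough illustration, auto-detection can be pictured as a series of regex probes tried against sample lines. This sketch is not the actual logic in scripts/analyze_logs.py, and the probe patterns here are deliberately simplified:

```python
import re

# Illustrative probes only; the real detection lives in scripts/analyze_logs.py.
FORMAT_PROBES = [
    ("json", re.compile(r'^\s*\{.*"(level|msg|message)"')),
    ("syslog", re.compile(r'^[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2} \S+ \S+:')),
    ("nginx_error", re.compile(r'^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2} \[\w+\]')),
    ("generic", re.compile(r'^\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\] [A-Z]+:')),
]

def detect_format(sample_lines):
    """Return the first format whose probe matches a sample line."""
    for line in sample_lines:
        for name, probe in FORMAT_PROBES:
            if probe.search(line):
                return name
    return "unknown"

print(detect_format(['2026/03/28 02:31:00 [error] upstream timed out']))
# prints: nginx_error
```

If no probe fires, a real implementation would fall back to the generic parser or prompt for --format.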

What It Does

  1. Parses log entries with format auto-detection
  2. Classifies severity (TRACE → DEBUG → INFO → WARN → ERROR → FATAL)
  3. Normalizes messages (replaces UUIDs, IPs, timestamps, paths with placeholders)
  4. Groups similar errors by fingerprint to find recurring patterns
  5. Ranks by severity and frequency
  6. Detects trends with --trends (hourly frequency buckets)
  7. Recommends fixes for 15+ known error patterns (OOM, connection refused, disk full, timeouts, SSL issues, rate limits, etc.)
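Steps 3 and 4 above can be sketched in a few lines: normalize variable tokens to placeholders, then hash the normalized message into a fingerprint so repeated errors collapse into one group. This is a minimal illustration; the shipped script's placeholder rules and grouping details may differ:

```python
import hashlib
import re
from collections import Counter

# Substitution order matters: specific tokens (UUIDs, IPs, timestamps)
# before the catch-all digit rule.
NORMALIZERS = [
    (re.compile(r'\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b'), '<uuid>'),
    (re.compile(r'\b\d{1,3}(?:\.\d{1,3}){3}\b'), '<ip>'),
    (re.compile(r'\b\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}\S*'), '<ts>'),
    (re.compile(r'(?:/[\w.-]+){2,}'), '<path>'),
    (re.compile(r'\d+'), '<n>'),
]

def fingerprint(message):
    """Return (short hash, normalized message) for grouping."""
    for pattern, placeholder in NORMALIZERS:
        message = pattern.sub(placeholder, message)
    return hashlib.sha1(message.encode()).hexdigest()[:12], message

groups = Counter()
for msg in [
    "timeout connecting to 10.0.0.5 after 30s",
    "timeout connecting to 10.0.0.9 after 45s",
]:
    fp, norm = fingerprint(msg)
    groups[(fp, norm)] += 1

for (fp, norm), count in groups.items():
    print(count, fp, norm)   # both messages land in one group of 2
```

Because both sample messages normalize to the same template, they share a fingerprint and are counted as one recurring pattern.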

Options

Flag        Default   Description
--format    auto      Force log format
--since     all       Time filter (1h, 24h, 7d, or ISO date)
--severity  warn      Minimum severity to report
--top       20        Number of top patterns to show
--output    text      Output format: text, json, markdown
--trends    off       Show hourly frequency trends
--ignore    none      Regex patterns to exclude (repeatable)
-q          off       Summary only, skip individual entries

Exit Codes

  • 0 — No errors found
  • 1 — Errors found (warn/error level)
  • 2 — Fatal/critical entries found

Use in CI/CD pipelines to fail builds on log errors.
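One way to wire these exit codes into a pipeline is a small gate that maps the documented codes to CI decisions. The invocation shown in the comment reuses flags from the examples on this page and is only a sketch, not a prescribed setup:

```python
def gate(returncode):
    """Map the analyzer's documented exit codes to a CI decision."""
    if returncode == 0:
        return "pass"           # 0: no errors found
    if returncode == 1:
        return "fail"           # 1: warn/error-level entries found
    return "fail-critical"      # 2: fatal/critical entries found

# In CI you might obtain the code with something like:
#   import subprocess
#   rc = subprocess.run(["python3", "scripts/analyze_logs.py",
#                        "/var/log/app.log", "--since", "24h",
#                        "--severity", "error", "-q"]).returncode
# and then branch on gate(rc).
print(gate(0), gate(1), gate(2))
```

A pipeline could treat "fail-critical" as a page-worthy failure while "fail" merely blocks the merge.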

Workflow

Incident Investigation

  1. Run with --since 1h --severity error --trends to see recent errors with frequency
  2. Review top patterns — the most frequent errors are usually the root cause
  3. Check recommendations for known patterns
  4. Use --output json to feed into monitoring dashboards
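If you consume the JSON output programmatically, note that the field names below are hypothetical placeholders for illustration; inspect what scripts/analyze_logs.py actually emits with --output json before depending on any schema:

```python
import json

# HYPOTHETICAL schema for illustration only -- the real field names are
# whatever scripts/analyze_logs.py emits with --output json.
sample = json.loads("""
{"patterns": [
  {"fingerprint": "a1b2c3", "count": 42, "severity": "error",
   "message": "timeout connecting to <ip> after <n>s"}
]}
""")

# Sort by count descending and emit one line per pattern for a metrics sink.
for p in sorted(sample["patterns"], key=lambda p: -p["count"]):
    print(f'{p["severity"]:>5}  x{p["count"]}  {p["message"]}')
```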

Periodic Health Check

  1. Run with --since 24h --output markdown for a daily report
  2. Compare pattern counts across days to spot trends
  3. Set up as cron job for automated daily digests

Deep Dive

  1. Run with --severity debug to see full picture
  2. Use --ignore to filter out known noise
  3. Check references/error-patterns.md for detailed remediation steps on specific error types

Error Pattern Reference

For detailed remediation guidance on specific error types (memory, network, database, SSL, etc.), see references/error-patterns.md.
