Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

aws-ecs-monitor

AWS ECS production health monitoring with CloudWatch log analysis — monitors ECS service health, ALB targets, SSL certificates, and provides deep CloudWatch...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 1.7k · 5 current installs · 5 all-time installs
by Brian Colinger (@briancolinger)
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description match the included scripts: both scripts call the AWS CLI to list/describe ECS services, CloudWatch logs, and ALB target groups, and perform HTTP/SSL checks. However, the registry metadata declares no required environment variables or primary credential, while the scripts require ECS_CLUSTER and implicitly require AWS credentials (via the aws CLI). This is inconsistent: an ECS monitor needs AWS credentials and at least ECS_CLUSTER declared.
Instruction Scope
SKILL.md instructs the agent to run the provided bash scripts which (a) call aws CLI to enumerate services, describe services/target groups, and pull CloudWatch logs, (b) perform HTTP probes with curl, and (c) write JSON reports to a local ./data directory. The instructions and scripts access AWS resources (logs, service metadata, target health) that require valid AWS credentials and specific IAM permissions but the skill metadata doesn't declare that. The scripts write to disk (./data) and will read environment variables; nothing else in the shown code appears to contact unknown external endpoints beyond user-configured domains for health checks.
Install Mechanism
No install spec is provided and the skill is instruction-plus-scripts only. No external archives, arbitrary downloads, or installers are executed by the skill itself — risk surface from installation is low. It does assume the presence of aws, curl, and python3 on PATH.
Credentials
The scripts require ECS_CLUSTER and assume AWS CLI credentials (e.g., via environment variables or AWS config) with permissions covering ECS, ELBv2, and CloudWatch Logs. The registry lists no required env vars and no primary credential. Declaring no primary credential is misleading: the skill only functions if the agent environment provides AWS credentials, and those credentials grant broad read access to cluster, ALB, and log data.
Persistence & Privilege
The skill does not request always:true and does not modify other skills or system-wide configurations. It writes output to a user-configurable local directory (default ./data) but does not request elevated or persistent platform privileges. Autonomous invocation is allowed (platform default) — note that with AWS credentials available the agent could run these scripts autonomously to query your account.
What to consider before installing
This skill appears to be a legitimate ECS health and log monitoring tool, but the package metadata is incomplete: it declares neither that ECS_CLUSTER is required nor that AWS credentials are needed. Before installing or running it, verify where and how the skill will get AWS credentials (environment variables, AWS config, or an instance role) and ensure the credentials are least-privileged (read-only: ecs:ListServices/DescribeServices, elbv2:DescribeTargetGroups/DescribeTargetHealth, logs:FilterLogEvents/DescribeLogGroups). Also, run the scripts in a safe test account or isolated environment first to confirm behavior and to inspect the full (untruncated) script contents for any data exfiltration or unexpected network calls. If you cannot provide controlled AWS credentials, do not enable this skill.
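As a sketch of what "least-privileged" could look like here, the read-only actions listed above can be captured in a standalone policy document. The policy file name is illustrative and not part of the skill:

```shell
# Write a read-only IAM policy covering only the actions the scripts need.
# The file name is illustrative; attach it via `aws iam create-policy`.
cat > ecs-monitor-readonly.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:ListServices",
        "ecs:DescribeServices",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "logs:FilterLogEvents",
        "logs:DescribeLogGroups"
      ],
      "Resource": "*"
    }
  ]
}
EOF
python3 -m json.tool ecs-monitor-readonly.json >/dev/null && echo "policy JSON is valid"
```

Scoping `Resource` to the specific cluster and log-group ARNs would tighten this further.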

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.1
latest: vk976b2hz5445757fxczy51z8xs81fv9g

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

Bins: aws, curl, python3
Any bin: openssl

SKILL.md

AWS ECS Monitor

Production health monitoring and log analysis for AWS ECS services.

What It Does

  • Health Checks: HTTP probes against your domain, ECS service status (desired vs running), ALB target group health, SSL certificate expiry
  • Log Analysis: Pulls CloudWatch logs, categorizes errors (panics, fatals, OOM, timeouts, 5xx), detects container restarts, filters health check noise
  • Auto-Diagnosis: Reads health status and automatically investigates failing services via log analysis

Prerequisites

  • aws CLI configured with appropriate IAM permissions:
    • ecs:ListServices, ecs:DescribeServices
    • elasticloadbalancing:DescribeTargetGroups, elasticloadbalancing:DescribeTargetHealth
    • logs:FilterLogEvents, logs:DescribeLogGroups
  • curl for HTTP health checks
  • python3 for JSON processing and log analysis
  • openssl for SSL certificate checks (optional)
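A quick preflight check for these tools can look like this (a minimal sketch; `check_bins` is an illustrative helper, not part of the skill's scripts):

```shell
# List any required binaries that are missing from PATH.
# check_bins is an illustrative helper, not part of the skill.
check_bins() {
  missing=""
  for bin in "$@"; do
    command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
  done
  echo "$missing"
}

missing=$(check_bins aws curl python3 openssl)
[ -z "$missing" ] && echo "all tools present" || echo "missing:$missing"
```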

Configuration

All configuration is via environment variables:

  • ECS_CLUSTER (required) — ECS cluster name
  • ECS_REGION (default: us-east-1) — AWS region
  • ECS_DOMAIN (optional) — Domain for HTTP/SSL checks (skip if unset)
  • ECS_SERVICES (default: auto-detect) — Comma-separated service names to monitor
  • ECS_HEALTH_STATE (default: ./data/ecs-health.json) — Path to write health state JSON
  • ECS_HEALTH_OUTDIR (default: ./data/) — Output directory for logs and alerts
  • ECS_LOG_PATTERN (default: /ecs/{service}) — CloudWatch log group pattern ({service} is replaced)
  • ECS_HTTP_ENDPOINTS (optional) — Comma-separated name=url pairs for HTTP probes
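The ECS_HTTP_ENDPOINTS entry above implies this shape for extra probes (the exact parsing lives in the scripts; the URLs here are placeholders, and the loop below is an illustrative sketch, not the script's own parser):

```shell
# Comma-separated name=url pairs for extra HTTP probes (placeholder URLs).
export ECS_HTTP_ENDPOINTS="api=https://api.example.com/healthz,web=https://example.com/"

# Sketch of how such pairs split into (name, url):
printf '%s\n' "$ECS_HTTP_ENDPOINTS" | tr ',' '\n' | while IFS='=' read -r name url; do
  echo "probe '$name' -> $url"
done
```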

Directories Written

  • ECS_HEALTH_STATE (default: ./data/ecs-health.json) — Health state JSON file
  • ECS_HEALTH_OUTDIR (default: ./data/) — Output directory for logs, alerts, and analysis reports
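After a run, the state file can be inspected directly (a sketch; `show_health_state` is an illustrative helper, and the JSON schema is whatever ecs-health.sh writes):

```shell
# Pretty-print the health state file if it exists (illustrative helper).
show_health_state() {
  state="${ECS_HEALTH_STATE:-./data/ecs-health.json}"
  if [ -f "$state" ]; then
    python3 -m json.tool "$state"
  else
    echo "no health state at $state (run ecs-health.sh first)" >&2
  fi
}
show_health_state
```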

Scripts

scripts/ecs-health.sh — Health Monitor

# Full check
ECS_CLUSTER=my-cluster ECS_DOMAIN=example.com ./scripts/ecs-health.sh

# JSON output only
ECS_CLUSTER=my-cluster ./scripts/ecs-health.sh --json

# Quiet mode (no alerts, just status file)
ECS_CLUSTER=my-cluster ./scripts/ecs-health.sh --quiet

Exit codes: 0 = healthy, 1 = unhealthy/degraded, 2 = script error
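These codes make the script easy to wire into alerting. A sketch, where `status_label` is an illustrative helper and the invocation is guarded so it is a no-op outside the skill's directory:

```shell
# Map the documented exit codes to labels (illustrative helper).
status_label() {
  case "$1" in
    0) echo healthy ;;
    1) echo unhealthy ;;
    2) echo script-error ;;
    *) echo unknown ;;
  esac
}

# Run the health check only if the script is present and executable.
if [ -x scripts/ecs-health.sh ]; then
  ECS_CLUSTER=my-cluster scripts/ecs-health.sh --quiet
  echo "status: $(status_label $?)"
fi
```

An exit code of 1 is the natural trigger for the auto-diagnose mode described below.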

scripts/cloudwatch-logs.sh — Log Analyzer

# Pull raw logs from a service
ECS_CLUSTER=my-cluster ./scripts/cloudwatch-logs.sh pull my-api --minutes 30

# Show errors across all services
ECS_CLUSTER=my-cluster ./scripts/cloudwatch-logs.sh errors all --minutes 120

# Deep analysis with error categorization
ECS_CLUSTER=my-cluster ./scripts/cloudwatch-logs.sh diagnose --minutes 60

# Detect container restarts
ECS_CLUSTER=my-cluster ./scripts/cloudwatch-logs.sh restarts my-api

# Auto-diagnose from health state file
ECS_CLUSTER=my-cluster ./scripts/cloudwatch-logs.sh auto-diagnose

# Summary across all services
ECS_CLUSTER=my-cluster ./scripts/cloudwatch-logs.sh summary --minutes 120

Options: --minutes N (default: 60), --json, --limit N (default: 200), --verbose

Auto-Detection

When ECS_SERVICES is not set, both scripts auto-detect services from the cluster:

aws ecs list-services --cluster $ECS_CLUSTER
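list-services returns ARNs rather than bare names. A name can be taken from the last path segment, assuming the standard arn:aws:ecs:region:account:service/cluster/name layout (a sketch; `arns_to_names` is an illustrative helper, not the script's own code):

```shell
# Turn ECS service ARNs into bare service names by taking the last
# slash-separated segment (illustrative helper, not part of the skill).
arns_to_names() {
  awk -F/ '{print $NF}'
}

# Guarded so the example is a no-op where the aws CLI is unavailable.
if command -v aws >/dev/null 2>&1; then
  aws ecs list-services --cluster "$ECS_CLUSTER" --output text --query 'serviceArns[]' \
    | tr '\t' '\n' | arns_to_names
fi
```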

Log groups are resolved by pattern (default /ecs/{service}). Override with ECS_LOG_PATTERN:

# If your log groups are /ecs/prod/my-api, /ecs/prod/my-frontend, etc.
ECS_LOG_PATTERN="/ecs/prod/{service}" ECS_CLUSTER=my-cluster ./scripts/cloudwatch-logs.sh diagnose
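The placeholder substitution itself can be sketched like this (`expand_log_group` is illustrative, not the script's own implementation):

```shell
# Replace the {service} placeholder in a log-group pattern (illustrative).
expand_log_group() {
  printf '%s\n' "$1" | sed "s|{service}|$2|"
}

# With ECS_LOG_PATTERN unset, this falls back to the default /ecs/{service}.
expand_log_group "${ECS_LOG_PATTERN:-/ecs/{service}}" my-api
```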

Integration

The health monitor can trigger the log analyzer for auto-diagnosis when issues are detected. Set ECS_HEALTH_OUTDIR to a shared directory and run both scripts together:

export ECS_CLUSTER=my-cluster
export ECS_DOMAIN=example.com
export ECS_HEALTH_OUTDIR=./data

# Run health check (auto-triggers log analysis on failure)
./scripts/ecs-health.sh

# Or run log analysis independently
./scripts/cloudwatch-logs.sh auto-diagnose --minutes 30

Error Categories

The log analyzer classifies errors into:

  • panic — Go panics
  • fatal — Fatal errors
  • oom — Out of memory
  • timeout — Connection/request timeouts
  • connection_error — Connection refused/reset
  • http_5xx — HTTP 500-level responses
  • python_traceback — Python tracebacks
  • exception — Generic exceptions
  • auth_error — Permission/authorization failures
  • structured_error — JSON-structured error logs
  • error — Generic ERROR-level messages

Health check noise (GET/HEAD /health from ALB) is automatically filtered from error counts and HTTP status distribution.
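A sketch of this kind of filter (an assumed pattern; the skill's actual regex may differ):

```shell
# Drop ALB health-check lines before counting errors (illustrative filter;
# the skill's own pattern may differ).
filter_health_noise() {
  grep -vE '(GET|HEAD) /health'
}

# Only the ERROR line survives the filter.
printf '%s\n' \
  'GET /health 200' \
  'ERROR: connection refused' \
  | filter_health_noise
```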

Files

3 files total