Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Alerting & Notification System

Centralized alerting and notification system for OpenClaw. Multi-channel alerts, intelligent rules, escalation, and audit.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 442 · 0 current installs · 0 all-time installs
by Rhandus Malpica (@rhanxerox)
Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description claim a centralized alert/notification system, and the code implements that. However, the package is tightly bound to a specific organization (many hardcoded tiklick.* URLs and the author's identity) and includes behaviors beyond a simple alert library: creating system cron jobs, using sudo and chown, and writing integration files into other skill directories. Those actions are not documented as required permissions in the registry metadata and are not obviously necessary for a general-purpose alerting skill.
Instruction Scope
SKILL.md shows CLI usage and lists environment variables, but the shipped scripts and instructions read and write system paths (/var/log/openclaw_alerts, /etc/cron.d, /workspace/.openclaw_alerts.json) and create or modify files under other skills (/workspace/skills/api-testing and /workspace/skills/security-tools). The code also suggests monitoring arbitrary files like /var/log/auth.log and calling curl against external endpoints — these expand the agent's access surface well beyond 'alerting' configuration.
Install Mechanism
There is no external install spec (no network download), which reduces supply‑chain concerns, but the included shell script uses sudo to create system folders and drops a cron file in /etc/cron.d. That implies elevated privileges and system persistence. Because these operations would run on the host if the integration script is executed, they are higher risk even though no remote download occurs.
Credentials
Registry metadata lists no required env vars, but the code reads several environment variables (TELEGRAM_CHAT_ID, GOOGLE_ACCOUNT, ADMIN_EMAIL), and SKILL.md documents ALERTING_* vars. Required secrets/addresses are not declared in the metadata, and default values like 'CHANGE_ME' and hardcoded emails (rhandus@gmail.com, admin@tiklick.com) appear. This mismatch is a red flag: the skill needs credentials to send notifications but does not declare them up front.
Persistence & Privilege
The skill creates persistent system artifacts: cron job in /etc/cron.d, log directories under /var/log, and integration JS files placed into other skills' directories. Modifying other skills' codebase and creating system‑wide cron entries are significant privileges and increase risk of lasting or cross-skill impact. The skill is not marked always:true, but its code seeks persistent privileges if run.
What to consider before installing
This skill implements a plausible alerting system, but several things don't add up and raise risk:

  • Privileged host changes: The included shell script uses sudo to create /var/log/openclaw_alerts, chowns it to a hardcoded user, and writes a cron job to /etc/cron.d. These operations require root and create persistent background activity on your host.
  • Modifies other skills: The integration functions write JavaScript files into other skills' directories (/workspace/skills/api-testing and /workspace/skills/security-tools). Installing or running this skill can therefore change other skills' behavior, a serious lateral-impact capability.
  • Undeclared credentials: The registry metadata declares no required env vars, yet the code expects TELEGRAM_CHAT_ID, GOOGLE_ACCOUNT, and ADMIN_EMAIL, and SKILL.md documents ALERTING_* variables. The mismatch makes it unclear what secrets you would need to supply and why.
  • Hardcoded organization targets: Many example monitors and cron jobs point to tiklick.* domains and a specific author/email, suggesting the package was built for one company's infrastructure rather than generic use.
  • Potential for unexpected network activity: The code monitors external endpoints and may invoke email/Telegram sending commands; if you provide credentials, it will use them. It also uses child_process.exec to run CLI commands (e.g., gog gmail), increasing the attack surface.

What to do before installing or running:

  1. Do not run alert_integration.sh or any init/cron commands as root on production hosts until you have reviewed and adapted them.
  2. Inspect and remove or sandbox the cron-installing code; prefer user-level scheduling or containerized deployment.
  3. Remove or review the code that writes into other skills' directories; prefer explicit, opt-in integration steps over automatic modification.
  4. Require the author to declare exactly which environment variables/credentials are needed in the registry metadata, and validate that they are used minimally.
  5. Run the skill in an isolated environment (container or VM) first and verify its behavior (what files it writes, which network endpoints it calls).
  6. Replace hardcoded emails, usernames, and URLs with configurable parameters.
  7. If you do not trust the author or the Tiklick ties, avoid granting credentials (Gmail/API tokens) and avoid running scripts that require sudo.
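As a concrete first step for the review above, the risky constructs can be located mechanically before anything is executed; a minimal audit sketch (the scratch directory and sample script contents below stand in for the real downloaded skill files):

```shell
# Hypothetical pre-install audit: unpack the skill into a scratch directory
# and grep for the risky patterns called out in the scan findings.
SKILL_DIR=$(mktemp -d)
printf 'sudo mkdir -p /var/log/openclaw_alerts\necho ok\n' > "$SKILL_DIR/alert_integration.sh"
grep -rnE 'sudo|chown|/etc/cron\.d|/workspace/skills/' "$SKILL_DIR" \
  && echo "review the matches above before installing" \
  || echo "no risky patterns found"
rm -rf "$SKILL_DIR"
```

Run the same grep against the actual downloaded skill directory; any hit on sudo, chown, or /etc/cron.d deserves manual review.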

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0

Tags: alerts, email, latest, monitoring, notifications, rhandus, telegram, tiklick

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🚨 Clawdis

SKILL.md

Alerting & Notification System

Centralized alerting and notification system for OpenClaw. Multi-channel alerts, intelligent rules, escalation, and auditing.

🎯 Goal

Let OpenClaw be proactive instead of reactive, detecting problems and notifying automatically before they impact operations.

📋 Features

Level 1 (Week 1 - Base):

  • Multi-channel: Telegram, Email (Gmail), Log
  • Basic rules: thresholds, patterns, schedules
  • Priorities: Info, Warning, Critical, Emergency
  • Grouping: related alerts grouped together
  • History: full audit trail of alerts

Level 2 (Week 2 - Advanced):

  • 🔄 Automatic escalation: if there is no response
  • 📊 Web dashboard: real-time visualization
  • 🤖 Auto-resolution: alerts that resolve themselves
  • 📈 Analysis: alert patterns and trends
  • 🔗 Integrations: webhooks, Slack, etc.

Level 3 (Week 3 - Intelligent):

  • 🧠 Learning: reduces false positives
  • Smart schedules: respects non-working hours
  • 👥 Routing: routes to the right person
  • 📱 Mobile: push notifications
  • 🔄 Feedback loop: continuous improvement

🚀 Usage

Main Commands:

alert monitor

Monitors an endpoint or resource.

# Monitor the Tiklick API
alert monitor https://api.tiklick.com/health --interval 60 --channel telegram

# Monitor a log file
alert monitor /var/log/tiklick_app.log --pattern "ERROR\|CRITICAL" --channel email

# Monitor a system metric
alert monitor system.cpu --threshold 80 --duration 300 --channel both

alert threshold

Configures threshold-based alerts.

# Minimum daily sales
alert threshold /workspace/ventas.csv --column "total" --min 1000000 --channel email

# Maximum disk usage
alert threshold system.disk --path /workspace --max 90 --channel telegram

# API response time
alert threshold api.response_time --url https://api.tiklick.com --max 2000 --channel both
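A threshold check like the sales example above reduces to reading the last value of a column and comparing it against a minimum; a standalone sketch (the sample CSV, column position, and threshold are illustrative):

```shell
# Build a tiny sample CSV, read the last row's "total" column (field 2),
# and alert when it falls below the minimum.
CSV=$(mktemp)
printf 'fecha,total\n2026-02-25,800000\n2026-02-26,1200000\n' > "$CSV"
MIN=1000000
LAST=$(tail -n 1 "$CSV" | cut -d, -f2)
if [ "$LAST" -lt "$MIN" ]; then
  echo "ALERT: total $LAST below $MIN"
else
  echo "ok: total $LAST"
fi
rm -f "$CSV"
```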

alert pattern

Searches for patterns in logs or data.

# Critical errors in logs
alert pattern /var/log/app.log --pattern "FATAL\|SEGFAULT\|OutOfMemory" --channel telegram

# Failed login attempts
alert pattern /var/log/auth.log --pattern "Failed password" --count 5 --window 300 --channel email

# Security patterns
alert pattern security --type "brute_force\|sql_injection\|xss" --channel both
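A pattern monitor like the failed-login example boils down to a grep count over a window; a minimal standalone sketch (the sample log lines and threshold are illustrative):

```shell
# Count "Failed password" lines in a sample log and alert at the threshold.
LOG=$(mktemp)
printf 'Failed password for root\nAccepted password for ops\nFailed password for admin\n' > "$LOG"
COUNT=$(grep -c 'Failed password' "$LOG")
if [ "$COUNT" -ge 2 ]; then
  echo "ALERT: $COUNT failed logins"
fi
rm -f "$LOG"
```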

alert status

Shows alert status.

# Active alerts
alert status --active

# Alert history
alert status --history --days 7

# Statistical summary
alert status --stats

alert resolve

Marks alerts as resolved.

# Resolve a specific alert
alert resolve ALERT-1234

# Resolve all alerts for a service
alert resolve --service api-tiklick

# Auto-resolve after verification
alert resolve --auto --check "curl -s https://api.tiklick.com/health"

⚙️ Configuration

Available Channels:

  1. telegram - Immediate Telegram notification
  2. email - Email to a configured list
  3. log - Entry in a log file
  4. dashboard - Visualization in the web dashboard
  5. all - All channels

Priorities:

  • emergency (🔴) - Requires immediate action
  • critical (🟠) - Action required soon
  • warning (🟡) - Attention recommended
  • info (🔵) - Informational only

Environment Variables:

ALERTING_TELEGRAM_CHAT_ID="${TELEGRAM_CHAT_ID}"  # environment variable
ALERTING_EMAIL_RECIPIENTS="rhandus@gmail.com,admin@tiklick.com"
ALERTING_SMTP_SERVER="smtp.gmail.com"
ALERTING_DASHBOARD_URL="http://localhost:3000/alerts"
ALERTING_RETENTION_DAYS="30"
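Since the registry metadata declares no required variables, a wrapper can verify the configuration before any alert command runs; a minimal sketch using POSIX parameter expansion (the inline assignment stands in for a value normally supplied by the environment):

```shell
# Abort early if a required variable is unset or empty; ':' is a no-op,
# the ${VAR:?msg} expansion performs the check and exits on failure.
ALERTING_TELEGRAM_CHAT_ID="123456"   # normally exported by the environment
: "${ALERTING_TELEGRAM_CHAT_ID:?set ALERTING_TELEGRAM_CHAT_ID before sending alerts}"
: "${ALERTING_RETENTION_DAYS:=30}"   # optional variable with a default
echo "config ok: retention ${ALERTING_RETENTION_DAYS} days"
```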

📊 Integration with Existing Skills

With API Testing:

# If the API fails, raise an alert
api test https://api.tiklick.com/health --on-failure "alert trigger api.down --priority critical"

With Security Tools:

# Alert on critical security findings
security scan --on-finding-critical "alert trigger security.critical --details {finding}"

With Docker Management:

# Alert if a container goes down
docker monitor tiklick-app --on-crash "alert trigger docker.crash --container {name}"

With Calendar Integration:

# Reminders for important events
calendar monitor --before 30 --action "alert trigger calendar.reminder --event {title}"

🎯 Examples for Tiklick

Case 1: Production API Monitoring

# Set up 24/7 monitoring
alert monitor https://api.tiklick.com/health \
  --interval 30 \
  --timeout 10 \
  --expected-status 200 \
  --on-failure "alert trigger api.production.down --priority emergency" \
  --on-recovery "alert resolve api.production.down" \
  --channel all

Case 2: Sales Below Threshold

# Check sales every hour
alert threshold /workspace/ventas/ultima_hora.csv \
  --column "total_ventas" \
  --min 500000 \
  --check-every 3600 \
  --on-below "alert trigger sales.low --priority warning --details 'Low sales: {value}'" \
  --channel telegram,email

Case 3: Failed Backup

# Verify the daily backup
alert monitor /workspace/backups/latest.tar.gz \
  --max-age 86400 \
  --min-size 1000000 \
  --on-failure "alert trigger backup.failed --priority critical" \
  --channel email
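The backup check above can be approximated with standard tools; a standalone sketch (the temp file stands in for /workspace/backups/latest.tar.gz, and the thresholds are illustrative):

```shell
# Flag the archive if it is older than 24 hours (1440 minutes) or smaller
# than MIN_SIZE bytes.
BACKUP=$(mktemp)
printf 'dummy backup payload' > "$BACKUP"
MIN_SIZE=5
SIZE=$(wc -c < "$BACKUP" | tr -d ' ')
if [ -n "$(find "$BACKUP" -mmin +1440)" ] || [ "$SIZE" -lt "$MIN_SIZE" ]; then
  echo "ALERT: backup stale or too small"
else
  echo "backup ok (${SIZE} bytes)"
fi
rm -f "$BACKUP"
```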

Case 4: Non-Working Hours (Silencing)

# Only critical alerts outside working hours
alert rule working-hours \
  --days mon-fri \
  --time 08:00-18:00 \
  --action "allow-all" \
  --else "allow-only critical,emergency"
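The working-hours rule can be sketched with plain date arithmetic, assuming the same Mon-Fri 08:00-18:00 window (the priority value is illustrative):

```shell
# Gate an alert by weekday/hour: during Mon-Fri 08:00-18:00 everything is
# delivered; outside that window only critical/emergency pass.
PRIORITY="warning"                 # illustrative alert priority
DOW=$(date +%u)                    # 1=Mon .. 7=Sun
HOUR=$(date +%H)
if [ "$DOW" -le 5 ] && [ "$HOUR" -ge 8 ] && [ "$HOUR" -lt 18 ]; then
  echo "deliver: $PRIORITY"
elif [ "$PRIORITY" = "critical" ] || [ "$PRIORITY" = "emergency" ]; then
  echo "deliver: $PRIORITY"
else
  echo "suppressed: $PRIORITY"
fi
```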

🔧 Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│    Detection    │───▶│   Processing    │───▶│  Notification   │
│   (Monitors)    │    │     (Rules)     │    │   (Channels)    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  Tiklick APIs   │    │  Grouping       │    │  Telegram       │
│  System         │    │  Escalation     │    │  Email          │
│  Logs           │    │  Deduplication  │    │  Dashboard      │
│  Metrics        │    │  Prioritization │    │  Log            │
└─────────────────┘    └─────────────────┘    └─────────────────┘
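The deduplication step in the processing stage amounts to collapsing repeated alert keys into one grouped alert; a minimal sketch with illustrative keys:

```shell
# Count occurrences of each alert key so repeats become a single grouped
# alert with a count, most frequent first.
printf 'api.down\napi.down\nsales.low\napi.down\n' | sort | uniq -c | sort -rn
```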

📈 Metrics & Monitoring

Metrics to Track:

  • Mean time to detection: < 60 seconds
  • Mean time to resolution: < 15 minutes
  • False positives: < 5%
  • Coverage: > 95% of critical systems
  • Satisfaction: > 4.5/5 in surveys

Alert Dashboard:

  • Active alerts by priority
  • Historical trend
  • Top services with problems
  • Response times
  • Resolution statistics

🛡️ Security

  • Authentication: verification of alert origin
  • Authorization: who can configure/view alerts
  • Auditing: complete log of all actions
  • Rate limiting: prevent alert spam
  • Encryption: sensitive data encrypted

🔄 Maintenance

Daily:

  • Review active alerts
  • Verify notification channels
  • Clean up old resolved alerts

Weekly:

  • Review rules and adjust thresholds
  • Analyze false positives
  • Update escalation contacts

Monthly:

  • Full system audit
  • Review of metrics and KPIs
  • Continuous improvement plan

🚨 Implementation Plan

Week 1: Base (Current)

  • Skill structure
  • Telegram channel
  • Basic rules
  • Initial testing

Week 2: Advanced

  • Email channel
  • Web dashboard
  • Advanced rules
  • Skill integration

Week 3: Intelligent

  • Automatic escalation
  • Machine learning
  • Mobile notifications
  • Optimization

Status: 🟡 IN DEVELOPMENT (Week 1)
Next milestone: Working Telegram channel
Owner: TK Claw
Target date: 2026-02-26

Files

6 total