Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Log Collector

v1.0.0

Permanent log collection agent. Collects logs and history from all nodes via SSH/VPN every 3 hours. Stores in logs.db with 30-day retention. Multi-node capab...

0 stars · 58 downloads · 1 version (current) · 1 (all-time)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for kikikari/log-collector.

Prompt Preview: Install & Setup
Install the skill "Log Collector" (kikikari/log-collector) from ClawHub.
Skill page: https://clawhub.ai/kikikari/log-collector
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install log-collector

ClawHub CLI


npx clawhub@latest install log-collector
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's stated purpose (collect logs over SSH/VPN) aligns with the code: the included script pings nodes, runs ssh to gather journalctl/syslog, and stores the results in a local DB. However, the metadata omissions are problematic: the skill declares no required binaries or credentials even though the script relies on system tools (ssh, ping, journalctl, tail) and filesystem write access to /home/openclaw/.openclaw/workspace. The package.json marks the skill as a daemon, but there is no install spec to create or enable a daemon. These mismatches are out of proportion to the described purpose.
Instruction Scope
SKILL.md instructs the agent to run the Python script periodically (cron) and references helper scripts (ssh_connector.py, vpn_checker.py, retention_cleanup.py) that are not present in the file manifest — only log_collector.py exists. The runtime instructions and code will: read a nodes table from the workspace DB, attempt SSH to arbitrary VPN IPs, and copy potentially sensitive logs (~/.openclaw/logs/*, system logs). The script uses 'StrictHostKeyChecking=no' (accepts host keys automatically) and does not actually honor the SSH-key paths shown in SKILL.md/config (the code never passes -i or uses node['ssh_key']), which is an inconsistency and a security risk (MITM and lack of explicit key control). SKILL.md also says configuration (SSH keys, IPs) is not in the skill and 'must be in env/db' — relying on an external DB for node config increases blast radius if that DB is misconfigured or accessible.
Install Mechanism
There is no install spec (instruction-only), which minimizes supply-chain risk from downloads. However package.json indicates openclaw.daemon: true while SKILL.md suggests manual cron setup; this inconsistency is a packaging/convention issue rather than an immediate code-supply risk. Because files are included, installing will place scripts on disk and the SKILL.md recommends adding a cron entry — that gives the skill persistent execution if the administrator follows instructions.
Credentials
The skill requests no environment variables or declared credentials, yet it requires network access to remote nodes and filesystem read/write under /home/openclaw/.openclaw/workspace (DB, logs, schema). The nodes configuration (including SSH key paths) is expected to be stored outside the skill; the script, however, ignores the declared ssh_key fields and relies on the system's SSH defaults, which is inconsistent and surprising. The skill will collect arbitrary logs from remote machines (which may contain secrets) and store them locally — this capability is powerful and should have explicit, proportional credential/config declarations and guidance, which are missing.
Persistence & Privilege
The skill is not marked always:true and model invocation is allowed (default). The SKILL.md instructs adding a cron entry to run every 3 hours, creating persistent scheduled execution if an operator follows it. package.json's daemon:true suggests it was designed to run persistently, but the package lacks an automated install that would register a daemon. This is not necessarily malicious, but combined with the other concerns it increases the attack surface if installed and scheduled.
What to consider before installing
This skill mostly does what it says (SSH/VPN log collection), but several red flags mean you should not install it blindly:

1. The script depends on system binaries (ssh, ping, journalctl, tail) and filesystem write access to /home/openclaw/.openclaw/workspace, yet the skill declares none of these. Verify those tools exist and that the skill will run under an account with the correct, limited permissions.
2. SKILL.md references helper scripts that are missing; confirm the expected files and DB schema (logs.db.schema.sql) are present before use.
3. The code ignores ssh_key entries from its configuration and uses 'StrictHostKeyChecking=no', which accepts host keys automatically (MITM risk) and means the collector may use unanticipated SSH keys/agents. Fix the code to explicitly use configured keys (-i) and enforce host key checking if you plan to use it.
4. The collector will fetch remote ~/.openclaw/logs and system logs that can contain secrets; ensure you trust the nodes and have appropriate retention and encryption for logs.db.
5. Test in an isolated/staging environment, review and possibly harden the script (honor ssh_key fields, validate inputs, constrain what files/commands are collected), and confirm where the nodes list and SSH credentials are stored and who can access them.

If you cannot validate the origin or fix the issues above, avoid installing it on production systems.
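Point 3 above can be addressed with a small wrapper. The sketch below builds a hardened ssh argv that honors a node's configured ssh_key (via -i) and enforces host key checking; the node dict shape and the openclaw remote user mirror the skill's config and troubleshooting examples, but the function itself is an assumption, not part of the shipped script.

```python
import os
import shlex

def build_ssh_cmd(node: dict, remote_cmd: str) -> list:
    """Build a hardened ssh argv for one node entry.

    Unlike the shipped script, this passes the configured key
    explicitly via -i and enforces host key checking instead of
    disabling it with StrictHostKeyChecking=no.
    """
    cmd = ["ssh", "-o", "StrictHostKeyChecking=yes", "-o", "BatchMode=yes"]
    if node.get("ssh_key"):
        # Expand ~ ourselves; ssh does not expand it when passed literally.
        cmd += ["-i", os.path.expanduser(node["ssh_key"])]
    cmd += ["openclaw@" + node["vpn_ip"], remote_cmd]
    return cmd

cmd = build_ssh_cmd(
    {"id": "node2", "ssh_key": "~/.ssh/node2_key", "vpn_ip": "10.10.0.2"},
    "journalctl -n 1000",
)
print(shlex.join(cmd))
```

BatchMode=yes additionally prevents interactive password prompts when the collector runs unattended from cron.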

Like a lobster shell, security has layers — review code before you run it.

Tags: agents · latest · logs · monitoring · ssh
58 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

Log Collector Sub-Agent

Permanent log-collection agent for all nodes in the cluster.

Tasks

Interval | Task | Details
Every 3 hours | Node query | SSH over VPN to all nodes
Every 3 hours | Log collection | System logs, OpenClaw logs, VPN status
Every 3 hours | Database update | Writes to logs.db
Daily | Retention cleanup | Deletes logs older than 30 days

Multi-Node Support

Node | Connection | Priority
Node 1 (Gateway) | Local | Primary (collection node)
Node 2 (Netcup) | SSH → 10.10.0.2 | Query target
Node 3 (xNetX) | SSH → 10.10.0.3 | Query target
Node 4+ | SSH → 10.10.0.X | Variable reachability

VPN Priority

1. Tailscale (primary) → faster, stable
2. WireGuard (fallback) → reliable tunnel
3. SSH over WAN (last fallback) → slowest

Database: logs.db

Tables

Table | Contents
nodes | Known nodes with VPN IP, SSH keys
logs | Collected logs (max. 30 days)
ssh_connections | Connection log (success/failure)
vpn_status | Tailscale/WireGuard status
collection_runs | Per-run query tracking
node_logs_raw | Raw, unprocessed logs

Retention

-- Automatic: delete logs older than 30 days
DELETE FROM logs WHERE retention_until < datetime('now');
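The retention query above can be exercised against a throwaway SQLite database. The column names other than retention_until are assumptions, since the shipped schema file (logs.db.schema.sql) is not shown here:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE logs (
        id INTEGER PRIMARY KEY,
        node_id TEXT,
        line TEXT,
        retention_until TEXT  -- ISO timestamp; row expires when it passes
    )
""")
# One expired row, one still inside the 30-day window.
con.execute("INSERT INTO logs (node_id, line, retention_until) VALUES "
            "('node2', 'old entry', datetime('now', '-1 day'))")
con.execute("INSERT INTO logs (node_id, line, retention_until) VALUES "
            "('node2', 'fresh entry', datetime('now', '+29 days'))")

# The retention rule from above:
con.execute("DELETE FROM logs WHERE retention_until < datetime('now')")
remaining = con.execute("SELECT line FROM logs").fetchall()
print(remaining)  # → [('fresh entry',)]
```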

Query Workflow

# For each node in the nodes table:
for node in nodes:
    # 1. Check VPN status
    vpn_status = check_vpn(node.tailscale_ip) or \
                 check_vpn(node.wireguard_ip)

    # 2. Attempt SSH connection
    if ssh_connect(node.ssh_key_path, node.vpn_ip):
        # 3. Fetch logs
        logs = ssh_exec('journalctl -n 1000')

        # 4. Store in logs.db
        insert_logs(node.node_id, logs)
    else:
        # 5. Log the error
        insert_ssh_error(node.node_id, "Connection failed")
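The pseudocode above can be turned into a testable loop by injecting the network operations as callables, so a dry run needs no SSH or VPN at all. The helper names and node fields mirror the pseudocode and are assumptions about the real script:

```python
def collect_all(nodes, check_vpn, ssh_connect, ssh_exec,
                insert_logs, insert_ssh_error):
    """Run one collection pass over all nodes (network ops injected)."""
    for node in nodes:
        # 1. VPN status: Tailscale first, WireGuard as fallback
        if not (check_vpn(node["tailscale_ip"]) or check_vpn(node["wireguard_ip"])):
            insert_ssh_error(node["id"], "No VPN route")
            continue
        # 2. Attempt the SSH connection
        if ssh_connect(node["ssh_key"], node["vpn_ip"]):
            # 3./4. Fetch logs and store them
            insert_logs(node["id"], ssh_exec("journalctl -n 1000"))
        else:
            # 5. Log the failure
            insert_ssh_error(node["id"], "Connection failed")

# Stubbed dry run: every network call is faked.
stored, errors = [], []
collect_all(
    [{"id": "node2", "tailscale_ip": "100.64.0.2", "wireguard_ip": "10.10.0.2",
      "vpn_ip": "10.10.0.2", "ssh_key": "~/.ssh/node2_key"}],
    check_vpn=lambda ip: True,
    ssh_connect=lambda key, ip: True,
    ssh_exec=lambda cmd: "fake journal output",
    insert_logs=lambda nid, logs: stored.append((nid, logs)),
    insert_ssh_error=lambda nid, msg: errors.append((nid, msg)),
)
print(stored)  # → [('node2', 'fake journal output')]
```

Structuring the loop this way also makes it easy to swap in a hardened SSH implementation without touching the collection logic.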

Variable Reachability

Scenario | Behavior
Node always reachable | Normal query, complete logs
Node sometimes reachable | Best effort, collects when possible
Node never reachable | Still attempted, failures logged
Gateway offline | Local buffering, transferred later

Permissions

exec: read_write_ssh
nodes: query_all
logs: collect_store
retention: cleanup_30d

Configuration

{
  "log-collector": {
    "enabled": true,
    "collection_interval_hours": 3,
    "retention_days": 30,
    "nodes": [
      {"id": "node1", "local": true},
      {"id": "node2", "ssh_key": "~/.ssh/node2_key"},
      {"id": "node3", "ssh_key": "~/.ssh/node3_key"},
      {"id": "node4", "ssh_key": "~/.ssh/node4_key", "optional": true}
    ],
    "vpn_priority": ["tailscale", "wireguard", "wan"]
  }
}
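A small loader for the block above can expand the ~ in the ssh_key paths (the shipped script reportedly ignores them) and sanity-check each node entry before the collector runs. This is a sketch under the sample config, not part of the skill:

```python
import json
import os

CONFIG_JSON = """
{
  "log-collector": {
    "enabled": true,
    "collection_interval_hours": 3,
    "retention_days": 30,
    "nodes": [
      {"id": "node1", "local": true},
      {"id": "node2", "ssh_key": "~/.ssh/node2_key"},
      {"id": "node3", "ssh_key": "~/.ssh/node3_key"},
      {"id": "node4", "ssh_key": "~/.ssh/node4_key", "optional": true}
    ],
    "vpn_priority": ["tailscale", "wireguard", "wan"]
  }
}
"""

def load_config(raw: str) -> dict:
    cfg = json.loads(raw)["log-collector"]
    for node in cfg["nodes"]:
        if "ssh_key" in node:
            # Expand ~ once here so every consumer gets an absolute path.
            node["ssh_key"] = os.path.expanduser(node["ssh_key"])
        elif not node.get("local"):
            raise ValueError(f"node {node['id']} has neither ssh_key nor local")
    return cfg

cfg = load_config(CONFIG_JSON)
print(cfg["collection_interval_hours"])  # → 3
```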

Scripts

Script | Purpose
log_collector.py | Main collection logic
ssh_connector.py | SSH connections with fallback
vpn_checker.py | VPN status checks
retention_cleanup.py | 30-day cleanup

Logs

logs/log-collector/
├── 2026-04-18.log
├── collection-errors.log
└── run-summary.json

Installation

# Install the skill
clawhub install log-collector

# Enable cron (every 3 hours)
0 */3 * * * /usr/bin/python3 /home/openclaw/.openclaw/workspace/skills/log-collector/scripts/log_collector.py

Troubleshooting

Problem: SSH connection failed

Checks:

# VPN reachable?
ping 10.10.0.2

# SSH key correct?
ssh -i ~/.ssh/node2_key openclaw@10.10.0.2

Problem: Logs not in the database

Checks:

# Does logs.db exist?
ls -la db/logs.db

# Error log
tail logs/log-collector/collection-errors.log

Integration

  • db-maintainer: separate DB, same backup strategy
  • workspace-db: documentation index (separate)
  • tree.db v2: file tracking (separate)

Note: Configuration (SSH keys, IPs) is not included in the skill - it must be configured in env/db.
