Log Collector
Verdict: Warn. Audited by ClawScan on May 10, 2026.
Overview
This appears to be a real monitoring skill, but it would run persistently and use broad SSH/VPN access to collect logs from all configured nodes with unclear credential and data-handling controls.
Install only in a controlled cluster where you explicitly want recurring centralized log collection. Before enabling cron, define a node allowlist, use dedicated least-privilege SSH keys, re-enable host-key verification, restrict which logs are collected, protect logs.db, and confirm how to disable the scheduled job.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If enabled, the skill can automatically connect to every configured node and run log-reading commands, and because host-key verification is disabled, a connection to the wrong or an impersonated host would go undetected.
The script selects every node from the database and runs remote SSH commands against reachable nodes. This fits log collection, but it is broad all-node tool execution and disables SSH host-key checking.
```python
cursor.execute("SELECT * FROM nodes")
...
subprocess.run(['ssh', '-o', 'ConnectTimeout=10', '-o', 'StrictHostKeyChecking=no', f'openclaw@{vpn_ip}', cmd], ...)
```

Use an explicit node allowlist, keep SSH host-key checking enabled with known_hosts pinning, document the exact allowed commands, and require operator approval before enabling recurring collection.
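A minimal sketch of the allowlist and host-key hardening this recommendation describes. The `ALLOWED_NODES` set and the `known_hosts` path are illustrative assumptions, not part of the skill's artifacts:

```python
import subprocess

# Hypothetical allowlist -- replace with the nodes you actually operate.
ALLOWED_NODES = {"10.8.0.11", "10.8.0.12"}

def collect_from(vpn_ip: str, cmd: str):
    """Run a read-only log command on an allowlisted node only."""
    if vpn_ip not in ALLOWED_NODES:
        return None  # skip anything outside the allowlist before any SSH attempt
    result = subprocess.run(
        ["ssh",
         "-o", "ConnectTimeout=10",
         # Re-enable host-key verification and pin to a maintained known_hosts file.
         "-o", "StrictHostKeyChecking=yes",
         "-o", "UserKnownHostsFile=/home/openclaw/.ssh/known_hosts",
         f"openclaw@{vpn_ip}", cmd],
        capture_output=True, text=True, timeout=60,
    )
    return result.stdout

# A node missing from the allowlist is rejected without opening a connection.
print(collect_from("10.8.0.99", "journalctl -n 500 --no-pager"))  # None
```

With `StrictHostKeyChecking=yes`, first contact with an unknown host fails instead of silently trusting it, so each node's key must be added to the pinned `known_hosts` file during a supervised setup step.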
Installing and scheduling this skill could grant it access to remote cluster nodes using whatever SSH identity is available on the host.
The code invokes SSH as the openclaw user without an explicit key path, so it may use the host's default SSH agent, config, or keys. This credential use is high-impact and not tightly bounded in the artifacts.
```python
['ssh', '-o', 'ConnectTimeout=10', '-o', 'StrictHostKeyChecking=no', f'openclaw@{vpn_ip}', cmd]
```

Require a dedicated least-privilege SSH key per node, declare the credential/config requirements in metadata, restrict the remote account to read-only log access, and avoid using default SSH identities.
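One way to keep the collector off the host's default identities is to pin the invocation to a dedicated key. The key path below is an assumption for illustration; the skill's artifacts do not specify one:

```python
def ssh_command(vpn_ip: str, cmd: str) -> list:
    """Build an SSH invocation pinned to a dedicated collector key.

    The key path is hypothetical: generate a separate keypair, e.g.
    ssh-keygen -t ed25519 -f ~/.ssh/log_collector_ed25519, and install
    only its public half in the remote account's authorized_keys.
    """
    return [
        "ssh",
        "-i", "/home/openclaw/.ssh/log_collector_ed25519",  # dedicated key
        "-o", "IdentitiesOnly=yes",  # never fall back to agent or default keys
        "-o", "ConnectTimeout=10",
        f"openclaw@{vpn_ip}", cmd,
    ]
```

Pairing this with a `command=` restriction on the matching `authorized_keys` entry limits the key to the exact log command, so even a leaked collector key cannot run arbitrary remote commands.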
Sensitive operational data from multiple nodes may be centralized in logs.db for 30 days and could later be exposed or over-trusted by other tools.
The collector gathers broad system and application logs and stores command output in the database. Logs can contain secrets, tokens, internal hostnames, errors, or untrusted text, and the artifacts do not show redaction or access controls.
```python
log_commands = [
    "journalctl -n 500 --no-pager",
    "tail -n 200 /var/log/syslog 2>/dev/null || echo 'no syslog'",
    "tail -n 200 ~/.openclaw/logs/*.log 2>/dev/null || echo 'no openclaw logs'",
]
...
log_entry['output'][:10000]
```
Add secret redaction, path and log-type limits, database file permission guidance, encryption or access controls where appropriate, and clear rules for how agents may use stored log content.
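A sketch of the redaction step the recommendation asks for, applied to each line before it is written to logs.db. The patterns are illustrative assumptions and would need to be extended for the secrets a given stack actually emits:

```python
import re

# Illustrative patterns only -- tune these to your environment's secret formats.
REDACT_PATTERNS = [
    # key=value / key: value style credentials
    re.compile(r"(?i)(?:password|passwd|secret|token|api[_-]?key)\s*[=:]\s*\S+"),
    # JWT-shaped three-part base64url tokens
    re.compile(r"eyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]+"),
]

def redact(line: str) -> str:
    """Replace anything matching a known secret pattern before storage."""
    for pat in REDACT_PATTERNS:
        line = pat.sub("[REDACTED]", line)
    return line

print(redact("db password=hunter2 ok"))  # db [REDACTED] ok
```

For the file-permission point, restricting the database with `os.chmod("logs.db", 0o600)` before the first write keeps other local accounts from reading the aggregated logs.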
Once cron is enabled, collection will keep running every three hours until the user disables it.
The skill explicitly documents a recurring cron job. This persistence is disclosed and aligned with monitoring, but it means the collector continues operating after setup.
Permanent log collection agent... Enable cron (every 3 hours):

```
0 */3 * * * /usr/bin/python3 /home/openclaw/.openclaw/workspace/skills/log-collector/scripts/log_collector.py
```
Only enable the cron job after reviewing the node list and SSH permissions, and document a clear disable/uninstall procedure (for example, removing the scheduled entry via crontab -e).
Users have less ability to confirm where the skill came from or compare it with an upstream project.
The registry information provides no upstream source or homepage for provenance review. This is not malicious by itself, but it reduces independent verifiability for a high-privilege monitoring skill.
Source: unknown
Homepage: none
Prefer installing high-privilege daemon skills from a known source repository with reviewable history and signed or pinned releases.
