Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Server Mate

v1.3.3

Build or extend a lightweight server monitoring and AI operations workflow for Linux hosts running Nginx or Apache. Use when Codex needs to collect psutil me...

Security Scan

- VirusTotal: Suspicious (view report)
- OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description match the included scripts and docs: collector agent, report generator, webhook delivery, GeoIP handling, and guarded automation. The included Python files implement the advertised features (log parsing, SQLite rollups, report PDFs, webhooks, optional AI analysis and auto-ban/heal templates). The presence of iptables/systemctl command templates and GeoIP bootstrap logic is coherent with the 'auto-ban', 'auto-heal', and 'GeoIP provisioning' features.
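The log-parsing and SQLite-rollup path can be sketched as below. This is a minimal illustration of the pattern, not the skill's actual code: the regex, table schema, and function name are all assumptions.

```python
import re
import sqlite3

# Simplified matcher for an Nginx/Apache combined-format access-log line
# (illustrative; the skill's real parser is more complete).
LINE_RE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<req>[^"]*)" (?P<status>\d{3})')

def rollup(log_lines, db_path=":memory:"):
    """Count requests per client IP and persist the rollup in SQLite."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS hits (ip TEXT PRIMARY KEY, count INTEGER)")
    for line in log_lines:
        m = LINE_RE.match(line)
        if not m:
            continue  # skip malformed lines rather than failing the whole pass
        conn.execute(
            "INSERT INTO hits VALUES (?, 1) "
            "ON CONFLICT(ip) DO UPDATE SET count = count + 1",
            (m.group("ip"),),
        )
    conn.commit()
    return dict(conn.execute("SELECT ip, count FROM hits"))
```

Per-IP counts like these are the kind of rollup an auto-ban rule would later threshold against.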
Instruction Scope
SKILL.md explicitly recommends read-only collection by default, keeping artifacts local, and leaving automation in dry-run. However, the agent supports (and will attempt) auto-detection of auth logs, and it has command_template execution paths for firewall and restart actions (guarded by config flags). The analyzer/report generator may call external OpenAI endpoints if ai_analysis is enabled and an API key is present. Operators should note that the agent can read configured system log paths (including /var/log/auth.log when auto-detected) and will transmit alerts/reports to operator-supplied webhooks, or to OpenAI/Telegram when those integrations are enabled.
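A conservative starting configuration might look like the fragment below. The flag names automation.dry_run, auto_ban, and auto_heal come from the skill's documentation; the other keys and the file layout are assumptions for illustration.

```yaml
# Illustrative config.yaml fragment; only automation.dry_run / auto_ban /
# auto_heal are documented names — everything else here is an assumption.
automation:
  dry_run: true        # documented default: log intended actions, execute nothing
  auto_ban: false      # gate for the iptables command templates
  auto_heal: false     # gate for the systemctl command templates
logs:
  auth_log: /var/log/auth.log   # set explicitly rather than relying on auto-detection
ai_analysis:
  enabled: false       # when true (and OPENAI_API_KEY is set), log excerpts leave the host
```

Pinning the log path explicitly avoids surprises from auto-detection, and the three false/true defaults keep the skill in its documented read-only posture.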
Install Mechanism
No install spec; the skill consists of instructions plus Python scripts, which lowers installation risk. Dependencies are standard Python packages (psutil, pyyaml, matplotlib, optional GeoIP libraries), and the code uses only typical stdlib networking and subprocess calls.
Credentials
No required environment variables are declared, which aligns with the skill being optional/locally configured. The code optionally reads TELEGRAM_BOT_TOKEN / TELEGRAM_CHAT_ID and OPENAI_API_KEY when features are enabled. Those variables are reasonable for the advertised integrations; they are optional and documented in SKILL.md/_meta.json. Operators should ensure webhook URLs and API keys are provided only when needed and kept secret in config or environment.
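The credential-handling pattern described above, reading secrets from the environment only when a channel is enabled, can be sketched as follows. The config schema and function name are hypothetical; only the TELEGRAM_BOT_TOKEN / TELEGRAM_CHAT_ID variable names come from the review.

```python
import os

def telegram_credentials(config):
    """Return (token, chat_id) only when the Telegram channel is enabled.

    Reading from the environment keeps secrets out of config files; the
    config keys used here are illustrative, not the skill's exact schema.
    """
    if not config.get("telegram", {}).get("enabled", False):
        return None  # feature off: never touch the secrets at all
    token = os.environ.get("TELEGRAM_BOT_TOKEN")
    chat_id = os.environ.get("TELEGRAM_CHAT_ID")
    if not token or not chat_id:
        raise RuntimeError("Telegram enabled but TELEGRAM_BOT_TOKEN/TELEGRAM_CHAT_ID unset")
    return token, chat_id
```

Failing loudly when a channel is enabled without its secrets is preferable to silently dropping alerts.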
Persistence & Privilege
Skill is not force-included (always:false) and follows an opt-in automation model (automation.dry_run defaults to true, auto_ban/auto_heal default to disabled). The code can execute system commands via configurable templates (iptables, systemctl) but those paths are gated by configuration and documented safety checks. No evidence the skill tries to modify other skills or system-wide agent config automatically.
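A guarded command-template path of the kind described above might look like this sketch. It is an assumed shape, not the skill's implementation; the key point is that dry_run short-circuits before anything executes, and building an argv list (rather than a shell string) keeps log-derived values such as attacker IPs from being shell-interpreted.

```python
import shlex
import subprocess

def run_template(template, params, dry_run=True):
    """Render a command template (e.g. an iptables or systemctl action) and
    execute it only when dry_run is explicitly disabled."""
    # Split first, then substitute: parameters land inside single argv
    # entries and are never re-parsed by a shell.
    cmd = [part.format(**params) for part in shlex.split(template)]
    if dry_run:
        return "DRY-RUN: " + " ".join(cmd)
    subprocess.run(cmd, check=True)
    return "EXECUTED: " + " ".join(cmd)
```

With the documented default of automation.dry_run: true, every action should resolve to the DRY-RUN branch until an operator opts in.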
Assessment
This package is internally consistent with a server-monitoring tool, but review and control these items before deployment:

- Automation: Keep automation.dry_run: true and auto_ban/auto_heal disabled until you've validated alerts, allowlists, cooldowns, audit logs, and rollback plans. The command templates (iptables, systemctl) will be executed if you enable automation.
- Secrets & webhooks: Webhook URLs, Telegram tokens, and any OpenAI key are operator-supplied secrets. Store them securely and do not commit them to Git. The agent will send data to whatever webhook/API endpoints you configure.
- GeoIP bootstrap: If no local MaxMind .mmdb is available, the report generator will attempt a public-mirror download (GEOIP_MIRROR_URL points at a GitHub-hosted mirror). Prefer provisioning GeoIP via your own MaxMind account and geoipupdate; treat public-mirror downloads as an operator-reviewed bootstrap only.
- Log paths & scope: The agent can auto-detect auth logs (e.g., /var/log/auth.log or /var/log/secure) if configured that way; verify config.yaml paths to avoid unintentionally reading system logs you don't want processed. Running with defaults on a production host may require root privileges for some operations and to access protected log files.
- Network egress: Enabling AI analysis or webhook channels will cause egress to third-party services (OpenAI, Telegram, DingTalk, Feishu, etc.). Audit the content you allow to be sent (raw or excerpted logs) and sanitize sensitive fields if needed.

If you want increased assurance before installing: inspect the command_template strings in your config, run the agent in a sandbox with synthetic logs, and prefer local-only config paths (./data, ./logs, ./reports) until you're ready to connect real endpoints.
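The pre-install audit suggested above (inspecting every command_template string and automation flag) can be scripted against the parsed config. This sketch walks an already-loaded config dict (e.g. from yaml.safe_load, since pyyaml is a listed dependency); only automation.dry_run/auto_ban/auto_heal and command_template are documented names, the rest of the shape is assumed.

```python
def audit(cfg, prefix=""):
    """Collect (dotted-path, value) pairs for every command_template string
    and every flag under the automation section, for operator review."""
    found = []
    for key, value in cfg.items():
        path = prefix + key
        if isinstance(value, dict):
            found += audit(value, path + ".")  # recurse into nested sections
        elif key == "command_template" or path.startswith("automation."):
            found.append((path, value))
    return found
```

Running this (and eyeballing the output) before flipping dry_run off makes the blast radius of enabling automation explicit.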

Like a lobster shell, security has layers — review code before you run it.

latest: vk97chnt90dvfcpc91tcgr9z2wn83tkv3

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
