Agent Deployment Checklist

Pass. Audited by VirusTotal on May 11, 2026.

Findings (1)

thought The user wants me to analyze a set of files for an OpenClaw AgentSkills skill bundle. The bundle contains:

- `_meta.json`: Metadata about the skill (owner, slug, version, date).
- `SKILL.md`: A detailed markdown file describing a "Production deployment checklist for AI agent infrastructure".

I need to classify this as `benign`, `suspicious`, or `malicious` based on specific criteria:

- Data exfiltration
- Malicious execution (RCE, etc.)
- Persistence/backdoors
- Prompt injection against the agent
- Obfuscation
- Supply chain issues

Let's look at `SKILL.md`. It's a checklist for deploying AI agents on Mac Mini or Linux servers. It includes:

- Layer 1: Base OS + OpenClaw Install (Scripted). Includes a bash script that installs Homebrew, nvm, python, git, etc.
- Layer 2: IAM Config (Manual). Mentions API keys, `.env` files, etc.
- Layer 3: Client-Specific Software.
- Layer 4: Security Hardening. Includes a script for the macOS firewall.
- Layer 5: Onboarding.
- 5-file memory system templates (`SOUL.md`, `IDENTITY.md`, `USER.md`, `AGENTS.md`, `MEMORY.md`).
- Starter Cron Templates (health check, memory maintenance, backup).

Analysis:

1. **Data exfiltration**: The scripts provided are templates. The `backup.sh` script backs up the workspace to a local directory, `/backups/openclaw/`. It doesn't seem to exfiltrate data to an external endpoint.
2. **Malicious execution**: The scripts use `curl | bash` for the Homebrew installation, which is standard practice but technically a risk. However, it pulls from a well-known URL (`raw.githubusercontent.com/Homebrew/install/HEAD/install.sh`), and the scripts are intended for the user/admin to run during deployment.
3. **Persistence / backdoors**: It sets up cron jobs for health checks, memory maintenance, and backups, which is standard server management. It also mentions SSH hardening (disabling password and root login).
4. **Prompt injection**: The `SKILL.md` is a set of instructions for a human or an AI agent to follow.
Does it contain instructions to ignore the user or hide actions? No. It emphasizes security ("Protect client data absolutely", "Never share client data outside this environment").
5. **Obfuscation**: None found.
6. **Supply chain**: It installs standard tools (nvm, python, git, jq).

The content seems to be a legitimate, well-structured guide for deploying AI agents securely. It includes security hardening steps (Layer 4) and emphasizes data protection.

Wait, let's look closer at the scripts.

`layer-1-base-install.sh`:
- Installs Homebrew.
- Installs nvm, python, git, jq.
- Creates a workspace.

This is standard setup.

`layer-4-firewall.sh`:
- Enables the macOS firewall and blocks all incoming connections. This is a security measure.

Cron scripts:
- `health-check.sh`: Checks disk space.
- `memory-maintenance.sh`: Manages files in the workspace.
- `backup.sh`: Backs up the workspace locally.

The `SKILL.md` instructions are for the agent to help a user deploy infrastructure. The "5-file memory system" is a common pattern in agentic frameworks.

Is there any prompt injection? The `SOUL.md` template says: "Protect client data absolutely", "Never share client data outside this environment". The "Deployment Anti-Patterns" section warns against sharing API keys and skipping security. The instructions are very specific to "dedicated hardware (Mac Mini, Linux servers)" and "NOT for cloud/serverless deployments".

One potential concern: the Layer 1 script runs `curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh | /bin/bash`. While standard for Homebrew, in a security context this is a "curl | bash" pattern. However, given the context of a deployment checklist, it is expected behavior for setting up a Mac.

The `SKILL.md` contains instructions that an AI agent would interpret. If an agent reads this, it will understand the deployment process. There are no "ignore previous instructions" or "exfiltrate keys" commands.
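The `curl | bash` concern has a standard mitigation worth noting: fetch the installer to disk, review it, then execute. A minimal offline sketch of that flow (the stub installer and the dangerous-pattern check are illustrative, not from the bundle; with the real Homebrew URL, the stub-creation step would be replaced by a `curl -fsSL -o "$installer" …` download):

```shell
# Hypothetical "download, inspect, then run" alternative to `curl | bash`.
# A locally generated stub stands in for the remote installer so the flow
# is demonstrable offline.
set -eu

installer="$(mktemp)"
printf '%s\n' '#!/bin/bash' 'echo "installer ran"' > "$installer"

# Review step: keep a checksum and scan for obviously dangerous commands
# before anything executes. (sha256sum is the GNU coreutils name; macOS
# ships `shasum -a 256` instead.)
sha256sum "$installer" > "$installer.sha256"
if grep -q 'rm -rf /' "$installer"; then
  echo "refusing suspicious installer" >&2
  exit 1
fi

# Only after review does the script actually run.
out="$(/bin/bash "$installer")"
echo "$out"
```

The point is not that `curl | bash` from the official Homebrew URL is malicious, only that a reviewable intermediate file is the cautious version of the same install.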
The `_meta.json` has a version `98.0.1` and a date in the future (1773724909664 is roughly March 2026). This is slightly odd but not necessarily malicious.

The content is a "Production deployment checklist". It's a guide. The "5-layer stack" and "5-file memory system" are architectural patterns.

Classification: Benign. The bundle provides a comprehensive and security-conscious framework for deploying AI agents. The scripts are standard administrative tasks (installing runtimes, configuring firewalls, setting up backups). There is no evidence of malicious intent, data exfiltration, or harmful prompt injection.

Wait, let me double-check the prompt-injection aspect. The instructions tell the agent how to behave: "Accuracy over speed", "Protect client data absolutely". These are positive constraints. The "Gate" sections are for the human/agent to verify progress.

One detail: `claude-sonnet-4-20250514` in the API test. This model name doesn't exist yet (as of my current knowledge cutoff), suggesting this is a forward-looking or hypothetical template.

The `backup.sh` script runs `find /backups/openclaw -maxdepth 1 -type d -mtime +30 -exec rm -rf {} \;`, which is a standard cleanup of old backups. The `SKILL.md` is essentially an SOP (Standard Operating Procedure).

Conclusion: Benign. Summary points:

- The bundle provides a structured deployment framework for AI agents on dedicated hardware.
- It includes security-focused steps like firewall configuration, SSH hardening, and local backup scripts.
- The instructions emphasize data privacy and operational integrity without any signs of malicious intent or exfiltration.

One more check: does it try to steal keys? Layer 2 says: "API keys stored in environment variables (never in files)" and "`.env` file created with proper permissions (`chmod 600`)". This is good practice. It includes a test script that verifies the API key using `curl`. The `SKILL.md` is a documentation file that an agent uses to guide a deployment.
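The epoch estimate above can be checked directly. A quick sketch using GNU `date` with the millisecond value quoted from `_meta.json` (on macOS the conversion step would be `date -u -r "$secs"` rather than `-d`):

```shell
# Convert the millisecond epoch timestamp from _meta.json to a calendar date.
ms=1773724909664
secs=$(( ms / 1000 ))   # drop the millisecond part: 1773724909

when="$(date -u -d "@$secs" '+%Y-%m-%d')"
echo "$when"            # → 2026-03-17, i.e. mid-March 2026 as estimated
```

So the "date in the future" observation holds: the metadata timestamp decodes to March 2026.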
It doesn't contain code that executes automatically upon loading the skill, but rather scripts that are meant to be run deliberately as part of the process.

Final check on "malicious" vs "suspicious":

- No intentional harmful behavior.
- No risky capabilities that aren't justified by the purpose (deployment).
- No obfuscation.
- No exfiltration.

Classification: benign.

```json
{
  "classification": "benign",
  "summary": "The skill bundle provides a comprehensive and security-conscious production deployment checklist for AI agent infrastructure on dedicated hardware. It includes standard administrative scripts for environment setup (Layer 1), firewall configuration (Layer 4), and local backup/maintenance crons, all of which align with the stated purpose of infrastructure management. The instructions in SKILL.md emphasize data protection, credential safety (e.g., using .env with restricted permissions), and operational integrity, with no evidence of malicious intent, data exfiltration, or harmful prompt injection."
}
```
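Since `jq` is among the tools the bundle installs, a verdict JSON like the one above can be sanity-checked mechanically before it is accepted downstream. A minimal sketch, assuming only the two fields shown and the three labels the analysis uses (the inline JSON here is an abbreviated stand-in for the real verdict):

```shell
# Verify the verdict JSON is well-formed and carries an allowed label.
verdict='{"classification":"benign","summary":"abbreviated stand-in"}'

label="$(printf '%s' "$verdict" | jq -r '.classification')"
case "$label" in
  benign|suspicious|malicious) echo "valid label: $label" ;;
  *) echo "unexpected label: $label" >&2; exit 1 ;;
esac
```

A malformed document or an out-of-vocabulary label fails loudly here (`jq` exits non-zero on invalid JSON), which is useful when verdicts from many scans are aggregated automatically.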