OpenClaw Setup on AWS (Free Tier) - Memory Upgrade
Risk level: Warn. Audited by ClawScan on May 10, 2026.
Overview
This is a coherent setup guide, but it asks an agent to deploy a persistent assistant with broad cloud, credential, Google, and memory access without enough scoping or approval boundaries.
Review this carefully before installing. Use temporary or least-privilege credentials. Verify AWS free-tier and monthly cost assumptions. Approve each infrastructure and sudo step manually. Pin or inspect remote installers. Limit Google OAuth scopes. Define memory retention and deletion rules. Make sure you know how to stop the 24/7 service and revoke all tokens.
Findings (6)
Methodology: artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Finding 1: Broad execution authority without granular approvals
Risk: A mistaken or overbroad agent action could alter the server, expose services, create costs, or misconfigure accounts.
Analysis: The skill grants the agent broad execution authority over a cloud server and the setup flow. Phase-level confirmation is present, but the artifacts do not show granular approval for sudo commands, network exposure, service changes, or rollback.
Evidence (SKILL.md): "Collect what you need from them (API keys, preferences), then SSH into their server and run everything. Confirm before moving between phases."
Recommendation: Require explicit user approval before each cloud change, sudo command, security-group change, service enablement, and credential write. Keep a rollback checklist.
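One way to make per-command approval concrete is a small wrapper that prints each privileged command verbatim and waits for explicit consent before running it. This is a sketch, not part of the skill; the function name and example commands are illustrative.

```shell
#!/bin/sh
# Illustrative approval gate: echo the exact command, require an explicit
# "y" before running it, and report a refusal otherwise. During setup,
# wrap every sudo, AWS CLI, and systemctl invocation with this function.
confirm_then_run() {
  printf 'About to run: %s\nProceed? [y/N] ' "$*"
  read -r answer
  case "$answer" in
    y|Y) "$@" ;;
    *) printf 'skipped: %s\n' "$*" >&2; return 1 ;;
  esac
}

# Usage sketch (unit and command names are placeholders):
# confirm_then_run sudo systemctl enable some-service
# confirm_then_run aws ec2 authorize-security-group-ingress --group-id sg-...
```

The same pattern doubles as an audit trail if you also append each approved command to a log file, which makes the rollback checklist easier to write afterward.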
Finding 2: High-impact credentials and account access
Risk: The assistant or setup agent could gain access to cloud resources, LLM billing keys, Telegram bot control, and Google data such as email, calendar, and Drive files.
Analysis: The skill asks for high-impact credentials and account access. The visible artifacts do not clearly bound IAM roles, OAuth scopes, token storage, revocation steps, or least-privilege requirements.
Evidence (SKILL.md): "AWS account access ... Anthropic API key ... Telegram account ... Groq API key ... OpenAI API key ... Google Workspace account"
Recommendation: Use least-privilege IAM users, separate API keys with budgets/quotas, narrow Google OAuth scopes, and revoke unused credentials after setup.
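Least privilege is easiest to audit when written down as an explicit policy. Below is a sketch of a narrowly scoped IAM policy for a throwaway setup user; the action list and the region condition are assumptions to adapt, not requirements taken from the skill.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OpenClawSetupOnly",
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:DescribeInstances",
        "ec2:CreateSecurityGroup",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateTags"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:RequestedRegion": "us-east-1" }
      }
    }
  ]
}
```

Delete this user or rotate its access keys once setup completes, and pair each LLM API key with a provider-side spending limit so a runaway agent cannot create open-ended costs.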
Finding 3: Unpinned remote installer run as root
Risk: If an upstream installer, package, or repository is compromised or changes unexpectedly, the server could run unreviewed code.
Analysis: The setup uses a remote installer executed with sudo, and also describes global npm installation and git clone/build steps. This is expected for a server setup guide, but provenance and pinning are not described.
Evidence (install spec): curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
Recommendation: Verify install sources, pin package versions or commits where possible, review scripts before running them, and avoid running remote scripts as root without inspection.
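Rather than piping the installer straight into a root shell, a safer pattern is download, inspect, verify, then run. The helper below is a sketch; the digest must be one you recorded yourself after reading the script, not a value taken from this report.

```shell
#!/bin/sh
# Sketch: fetch the installer to disk, read it, and gate execution on a
# SHA-256 digest you recorded after inspecting it. Refuses on mismatch.
verify_checksum() {
  file="$1"
  expected="$2"
  actual="$(sha256sum "$file" | awk '{print $1}')"
  [ "$actual" = "$expected" ]
}

# Usage sketch (digest placeholder stays a placeholder on purpose):
# curl -fsSL -o nodesource_setup.sh https://deb.nodesource.com/setup_22.x
# less nodesource_setup.sh
# verify_checksum nodesource_setup.sh "<sha256-you-recorded>" \
#   && sudo -E bash nodesource_setup.sh
```

The same gate works for any remote script in the setup; for git clone/build steps, checking out a specific commit hash serves the equivalent pinning role.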
Finding 4: Long-term memory of sensitive personal information
Risk: Sensitive personal information may be stored long-term and later used to guide actions or recommendations, even when the remembered content is inaccurate or malicious.
Analysis: The assistant is designed to persist and reuse personal context over time. The artifacts do not clearly specify retention limits, deletion controls, exclusions, or safeguards against poisoned memories influencing future actions.
Evidence (SKILL.md): "Persistent conversation history across sessions ... Automatic categorization of important information ... Searchable knowledge base of past interactions"
Recommendation: Define what may be stored, how to delete memories, how long data is retained, and when the assistant must ask before using remembered information.
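Whatever the final mechanism, it helps to write the memory rules down before enabling persistence. The fragment below is a hypothetical policy file; every field name is invented for illustration, since the artifacts do not describe OpenClaw's actual configuration format.

```json
{
  "memory_policy": {
    "retention_days": 90,
    "never_store": ["passwords", "financial_account_numbers", "health_details"],
    "ask_before_using": ["contacts", "location_history"],
    "deletion_command": "forget <topic>",
    "review_schedule": "monthly"
  }
}
```

Even if the assistant cannot enforce such a file directly, pasting the rules into its system instructions and reviewing stored memories on the stated schedule approximates the same control.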
Finding 5: Overbroad privacy assurance
Risk: A user may share sensitive data believing it never leaves their server, even though external providers are part of the described workflow.
Analysis: This privacy assurance is overbroad in context: the setup also configures Anthropic, optional OpenAI embeddings, Groq voice transcription, Telegram, and optional Google Workspace integrations, which can involve third-party data processing.
Evidence (SKILL.md): "Your data stays on your server. Your AI works for you. Nobody else has access."
Recommendation: Clarify which data is sent to each provider, review each provider's retention policy, and avoid enabling integrations that are not needed.
Finding 6: Persistent, unsupervised operation
Risk: The assistant may continue operating after setup and could take actions or send messages when the user is not actively supervising it.
Analysis: The artifacts intentionally create a persistent, proactive assistant with background workers and restart behavior. That is purpose-aligned, but the provided text does not clearly define action limits, emergency stop controls, or approval requirements for autonomous tasks.
Evidence (SKILL.md): "runs 24/7 ... can take actions on your behalf ... Background task workers for long-running projects ... Auto-restart on crashes"
Recommendation: Set explicit autonomy limits, require confirmation for account-modifying actions, document how to stop/disable the service and cron jobs, and monitor logs after installation.
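Before going live, verify that you can halt everything. The runbook below is a sketch that assumes a systemd unit named "openclaw"; the unit name and cron entries are assumptions, so list what the installer actually created and substitute accordingly.

```shell
#!/bin/sh
# Emergency-stop sketch. The unit name "openclaw" is an assumption;
# identify the real unit first, then stop and disable each piece.
systemctl list-units --type=service | grep -i claw   # find the real unit name
sudo systemctl stop openclaw                          # halt the running service
sudo systemctl disable openclaw                       # prevent restart on boot
crontab -l                                            # review user cron jobs
sudo crontab -l                                       # review root cron jobs
# Then revoke credentials at each provider:
#   AWS IAM access keys; Anthropic, OpenAI, and Groq API keys;
#   the Telegram bot token (via @BotFather's /revoke);
#   Google OAuth grants at https://myaccount.google.com/permissions
```

Rehearsing this once right after installation, while nothing important depends on the assistant yet, is the cheapest time to discover a missing step.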
