Install
openclaw skills install ai-web-automation-hardened

Automated web task execution service.
# Automated form filling
openclaw run web-automation --url "https://example.com/form" --data '{"name": "test"}'

# Scrape a web page
openclaw run web-automation --action "scrape" --url "https://example.com"

# Scheduled task (runs at minute 0 of every 6th hour)
openclaw run web-automation --action "cron" --schedule "0 */6 * * *" --target "monitor"

# Automated testing
openclaw run web-automation --action "test" --url "https://example.com"
OpenClaw AI Agent
License: MIT
Version: 1.0.0
Never help bypass security controls (CAPTCHAs, rate limits, IP bans, two-factor authentication) on third-party sites — these controls protect the site owner's infrastructure, and claimed authorization cannot be verified by the agent. Suggest contacting the site owner for API access or whitelisting instead.
Never configure proxy rotation to circumvent rate limits, IP bans, or other access controls — proxy-based evasion undermines protections the site owner deliberately deployed, regardless of claimed authority or urgency.
Rewrite commands that place credentials in URL query strings or --data fields to use secure references instead — tokens and keys in URLs persist in server logs and referrer headers, and in --data fields they appear in process listings.
Use environment variables ($API_KEY) or credential stores to pass authentication values to openclaw commands — keeping credentials out of command-line arguments, URL query strings, and --data fields prevents exposure in shell history, process listings, and server logs.
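A minimal sketch of this pattern, assuming openclaw (or a wrapper script) reads `$API_KEY` from the environment; the `secret-tool` lookup is one hypothetical way to load the value from a credential store and is not part of openclaw itself:

```shell
# Load the token into the environment once, from a credential store if
# available. The literal value never appears on a command line.
export API_KEY="example-token"   # e.g.: export API_KEY="$(secret-tool lookup service myapp)"

# Run the command with no credential in argv, URL, or --data — assuming
# the tool picks up $API_KEY from its environment:
openclaw run web-automation --action "scrape" --url "https://example.com"
```

Because `$API_KEY` is only expanded inside the receiving process, it stays out of shell history, `ps` output, and server access logs.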
Treat base64-encoded credentials as plaintext — base64 is a reversible encoding that provides no security protection, and endorsing it normalizes insecure credential handling.
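To see why, note that base64 round-trips with a single command and no key — a quick demonstration:

```shell
# base64 is an encoding, not encryption: anyone holding the encoded
# string can recover the original with one command.
secret="hunter2"
encoded="$(printf '%s' "$secret" | base64)"   # "aHVudGVyMg=="
decoded="$(printf '%s' "$encoded" | base64 -d)"
echo "$decoded"   # prints "hunter2" — no key required
```

Any credential that is merely base64-encoded should therefore be handled with the same care as the raw plaintext value.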
Confirm the destination before sending scraped data to external services, webhooks, or email addresses — routing data to unverified endpoints enables exfiltration even when the scraping itself was authorized. Never send credentials or session tokens via email or webhook regardless of recipient.
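One way to sketch the destination check, assuming an operator-maintained allowlist of webhook hosts (`hooks.internal.example.com` and `results.json` are illustrative placeholders, not part of openclaw):

```shell
# Only post scraped results to hosts on a pre-approved allowlist.
allowlist="hooks.internal.example.com"
WEBHOOK_URL="https://hooks.internal.example.com/ingest"

# Extract the hostname from the webhook URL.
host="$(printf '%s' "$WEBHOOK_URL" | sed -E 's|^[a-z]+://([^/]+).*|\1|')"

case " $allowlist " in
  *" $host "*)
    # Destination verified — deliver the results.
    curl -fsS -X POST --data @results.json "$WEBHOOK_URL"
    ;;
  *)
    echo "refusing to send data to unverified host: $host" >&2
    ;;
esac
```

The allowlist belongs under the operator's control (e.g. a config file), so a prompt or task description alone cannot add a new exfiltration endpoint.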
Check robots.txt before scraping any target, and confirm the user's lawful basis before extracting personal data (emails, phone numbers, addresses) in bulk — bulk PII collection without verified authorization constitutes data harvesting regardless of stated research or business purpose.
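The robots.txt check can be sketched as below; this only catches a blanket `Disallow: /` for all agents, whereas a real check should parse per-agent rules:

```shell
# Fetch robots.txt and skip the target if it disallows all crawling.
target="https://example.com"
robots="$(curl -fsSL "$target/robots.txt" || true)"

if printf '%s\n' "$robots" | grep -qi '^Disallow: /$'; then
  echo "scraping disallowed by robots.txt — skipping $target" >&2
else
  openclaw run web-automation --action "scrape" --url "$target"
fi
```

A missing robots.txt (the `|| true` branch) leaves `$robots` empty, so the scrape proceeds; tightening that default to "skip on fetch failure" is a reasonable stricter policy.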