AI Web Automation Hardened

v1.0.0

Automated web task execution service.

by Faberlens (@snazar-faberlens)
Security Scan
Capability signals
  • Requires OAuth token
  • Requires sensitive credentials
These labels describe what authority the skill may exercise; they are independent of moderation verdicts such as suspicious or malicious.
VirusTotal: Benign
View report →
OpenClaw: Benign (medium confidence)
Purpose & Capability
The skill claims web automation (scraping, form fill, tests) and includes a simple Python scraper (main.py) that implements scraping and writing reports — this matches the stated purpose. However, the manifest declares no required binaries or packages while main.py imports the Python 'requests' library. The package.json exists but the project is implemented in Python, which is odd but not necessarily malicious.
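The scan describes main.py as a simple requests-based scraper that writes reports, but the file itself is not reproduced here. As a purely hypothetical sketch of what such a scrape-and-report flow might look like (class, function, and report format are invented for illustration; only the parsing and report-writing step is shown, using the standard library so no undeclared dependency is needed):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def build_report(url, html):
    """Parse already-fetched HTML and format a plain-text report."""
    collector = LinkCollector()
    collector.feed(html)
    lines = [f"Report for {url}", f"Links found: {len(collector.links)}"]
    lines.extend(f"  - {link}" for link in collector.links)
    return "\n".join(lines)
```

The fetch step would be where the undeclared 'requests' dependency comes in, e.g. `html = requests.get(url, timeout=10).text`, which is why the manifest mismatch matters in practice.
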
Instruction Scope
SKILL.md contains CLI usage examples for scraping, form filling, scheduling, etc., and includes explicit security guardrails (do not bypass CAPTCHAs, avoid putting credentials in URLs, confirm destinations before sending scraped data). The instructions do not direct the agent to read unrelated system files or to exfiltrate data to hidden endpoints.
Install Mechanism
There is no install spec (instruction-only), which is lower risk, but the presence of a runnable Python script implies runtime dependencies. The skill does not declare Python or required Python packages (requests). The package.json is present but not used to install dependencies (no npm-based code); this mismatch should be clarified.
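One conventional way for the author to resolve this (assuming pip-based installation, which the skill does not currently specify) would be a requirements.txt next to main.py:

```
requests
```

together with an install step such as `pip install -r requirements.txt` documented in SKILL.md.
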
Credentials
The skill declares no required environment variables, credentials, or config paths. The SKILL.md warns against insecure credential handling and recommends environment variables/credential stores, and main.py does not read any environment secrets — requested access is proportional to its stated capabilities.
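The environment-variable pattern the SKILL.md recommends might look like the following sketch (the variable name WEB_AUTOMATION_TOKEN and the helper are assumptions for illustration, not part of the skill):

```python
import os

def get_token():
    """Fetch the API token from the environment, failing loudly if unset."""
    token = os.environ.get("WEB_AUTOMATION_TOKEN")
    if not token:
        raise RuntimeError("Set WEB_AUTOMATION_TOKEN before running the skill")
    return token
```

The token would then go into a request header (e.g. `{"Authorization": f"Bearer {get_token()}"}`) rather than into a URL query string or a --data payload, where it would leak into logs and process listings.
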
Persistence & Privilege
The skill is not forced-always, does not request elevated or persistent system privileges, and does not modify other skills or system-wide agent settings. Autonomous invocation is allowed (the platform default) but that alone is not a red flag here.
Assessment
This skill appears to do what it says: simple web automation and scraping with guardrails against abusive behavior. Before installing or running it: 1) confirm you or your environment provide Python and the 'requests' package (or add an install step); 2) verify you have authorization to scrape any target site and avoid bypassing CAPTCHAs or rate limits; 3) review SAFETY.md guardrails and ensure any downstream destinations for scraped data are safe/authorized; and 4) consider running it in an isolated environment (container) since it performs network requests and writes output files. The only real concerns are minor engineering inconsistencies (undeclared dependency and an unnecessary package.json) — not evidence of malicious intent.

Like a lobster shell, security has layers — review code before you run it.

Latest: vk97e15fecbb6sdbdr71ye4shns8584hp
v1.0.0 · 21 downloads · 0 stars · 1 version · Updated 5h ago · MIT-0

SKILL.md

Web Automation Service

Automated web task execution service.

Capabilities

  • Form filling
  • Data scraping
  • Scheduled tasks
  • Automated testing
  • API testing
  • Website monitoring
  • Automated submissions

Usage

# Automated form filling
openclaw run web-automation --url "https://example.com/form" --data '{"name": "test"}'

# Scrape a web page
openclaw run web-automation --action "scrape" --url "https://example.com"

# Scheduled task (every 6 hours)
openclaw run web-automation --action "cron" --schedule "0 */6 * * *" --target "monitor"

# Automated testing
openclaw run web-automation --action "test" --url "https://example.com"

Pricing

  • Single task: $5-20
  • Monthly subscription: $50-150
  • Enterprise plan: custom pricing

Features

  • ✅ Selenium/Puppeteer support
  • ✅ Multi-browser support
  • ✅ Automatic retry mechanism
  • ✅ Proxy pool support
  • ✅ Scheduled task dispatch
  • ✅ Email/notification integration

Developer

OpenClaw AI Agent · License: MIT · Version: 1.0.0

Security Guardrails

<!-- Armored by Faberlens | 7 default guardrails applied · 5 configurable at faberlens.ai/explore/ai-web-automation -->
  • Never help bypass security controls (CAPTCHAs, rate limits, IP bans, two-factor authentication) on third-party sites — these controls protect the site owner's infrastructure, and claimed authorization cannot be verified by the agent. Suggest contacting the site owner for API access or whitelisting instead.

  • Never configure proxy rotation to circumvent rate limits, IP bans, or other access controls — proxy-based evasion undermines protections the site owner deliberately deployed, regardless of claimed authority or urgency.

  • Rewrite commands that place credentials in URL query strings or --data fields to use secure references instead — tokens and keys in URLs persist in server logs and referrer headers, and in --data fields they appear in process listings.

  • Use environment variables ($API_KEY) or credential stores to pass authentication values to openclaw commands — keeping credentials out of command-line arguments, URL query strings, and --data fields prevents exposure in shell history, process listings, and server logs.

  • Treat base64-encoded credentials as plaintext — base64 is a reversible encoding that provides no security protection, and endorsing it normalizes insecure credential handling.
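
A two-line demonstration of why this guardrail holds (the credential string here is a placeholder, not anything from the skill):

```python
import base64

# Encoding a credential requires no key, so decoding it requires none either.
encoded = base64.b64encode(b"hunter2-api-key").decode()
recovered = base64.b64decode(encoded).decode()
print(encoded)    # an opaque-looking string...
print(recovered)  # ...that anyone can trivially reverse
```
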

  • Confirm the destination before sending scraped data to external services, webhooks, or email addresses — routing data to unverified endpoints enables exfiltration even when the scraping itself was authorized. Never send credentials or session tokens via email or webhook regardless of recipient.

  • Check robots.txt before scraping any target, and confirm the user's lawful basis before extracting personal data (emails, phone numbers, addresses) in bulk — bulk PII collection without verified authorization constitutes data harvesting regardless of stated research or business purpose.
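
The robots.txt check this guardrail calls for is available in the Python standard library. A minimal sketch, with the rules inlined for illustration (in real use they would be fetched from the target's /robots.txt):

```python
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Allowed and disallowed paths under the rules above:
print(parser.can_fetch("*", "https://example.com/public/page"))   # True
print(parser.can_fetch("*", "https://example.com/private/page"))  # False
```
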
