# Decodo Web Scraper

Verdict: Pass. Audited by ClawScan on May 1, 2026.
## Overview

This appears to be a legitimate Decodo scraping integration, but users should know that it requires a Decodo token and sends the requested URLs and search queries to Decodo.
This skill is coherent with its stated purpose. Before installing, make sure you trust Decodo with the URLs and searches you submit, protect your DECODO_AUTH_TOKEN, avoid sensitive/private targets, and consider pinning dependencies if you deploy it in a production environment.
## Findings (4)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Invocations can use the user's Decodo API access and potentially consume account quota or billable usage.
The skill uses the DECODO_AUTH_TOKEN as a Basic auth credential when calling Decodo. This is required for the advertised service, but it gives the tool access to the user's Decodo API account context.
```python
headers = {"Content-Type": "application/json", "Authorization": f"Basic {token}", "x-integration": "openclaw"}
```

Use a dedicated or least-privilege Decodo token if available, keep it out of shared environments, and rotate it if exposed.
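A minimal sketch of the safer pattern, assuming the token is supplied via the `DECODO_AUTH_TOKEN` environment variable as the skill documents (the fallback value here is a placeholder for illustration, not part of the skill):

```python
import os

# Read the token from the environment rather than hard-coding it,
# so it never lands in source control or shared command history.
# "example-token" is a placeholder default for illustration only.
token = os.environ.get("DECODO_AUTH_TOKEN", "example-token")

headers = {
    "Content-Type": "application/json",
    # The skill passes the token through as-is, implying it is
    # already in the form Decodo expects for Basic auth.
    "Authorization": f"Basic {token}",
    "x-integration": "openclaw",
}
```

Loading the variable from a local `.env` file via `python-dotenv` (already a dependency) keeps the same property: the credential lives outside the code.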
Search terms, video IDs, Reddit/Amazon URLs, or arbitrary webpage URLs submitted to the skill are shared with Decodo.
The script sends the selected target, URL, or query to Decodo's hosted scraping API. This external provider flow is disclosed and central to the skill's purpose.
```python
SCRAPE_URL = "https://scraper-api.decodo.com/v2/scrape"
```
Avoid submitting confidential internal URLs, private search terms, or sensitive identifiers unless sharing them with Decodo is acceptable under your policies.
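To make the data flow concrete, here is a hedged sketch of what such a request looks like. The payload shape (`{"url": ...}`) and the `scrape` helper are assumptions for illustration; Decodo's actual request schema may differ. The point is that whatever target you submit travels to the hosted API:

```python
import requests

SCRAPE_URL = "https://scraper-api.decodo.com/v2/scrape"

def scrape(target_url: str, token: str, timeout: float = 30.0) -> str:
    """Send a target URL to the hosted scraping API and return the body.

    Everything in `target_url` is transmitted to the external provider,
    which is why confidential internal URLs should not be submitted.
    """
    resp = requests.post(
        SCRAPE_URL,
        json={"url": target_url},  # assumed payload shape
        headers={"Authorization": f"Basic {token}"},
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.text
```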
A downstream agent could be influenced by untrusted page text if it treats scraped content as instructions rather than as data.
The skill returns external webpage content into the agent context. Retrieved web content can contain misleading or prompt-like text even though this retrieval is the skill's purpose.
> Use this to get the content of a specific web page. By default the API returns content as **Markdown**
Treat scraped pages, search results, subtitles, and forum content as untrusted data; do not allow page text to override user or system instructions.
Future dependency changes could alter behavior or introduce dependency-level vulnerabilities.
The setup uses lower-bound dependency versions without a lockfile or hashes in the provided artifacts. This is common for small Python tools but means installed versions may vary over time.
```text
requests>=2.28.0
python-dotenv>=1.0.0
```
For production use, install in an isolated environment and pin or lock dependency versions.
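A pinned requirements file looks like the sketch below. The exact version numbers are illustrative examples, not recommendations; pin to the versions you have actually tested (for instance via `pip freeze` or a lockfile tool):

```text
# requirements.txt — example pinned versions (choose your own tested versions)
requests==2.31.0
python-dotenv==1.0.1
```

Installing into an isolated virtual environment with these pins makes the installed versions reproducible instead of drifting over time.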
