openclaw-serper

v3.1.1

Searches Google and extracts full page content from every result via trafilatura. Returns clean readable text, not just snippets. Use when the user needs web search, research, current events, news, factual lookups, product comparisons, technical documentation, or any question requiring up-to-date information from the internet.

1 · 2.1k · 3 current · 3 all-time
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The script performs Google searches via the Serper API and then fetches & extracts full page content with trafilatura — exactly what the name/description claim.
Instruction Scope
SKILL.md restricts behavior to a single well-formed query (max two calls) and explicitly tells agents not to re-fetch URLs. The runtime script only reads a local .env (skill root) and performs network calls to Serper and the result pages; it does not read other system files.
Install Mechanism
No install spec; the skill is instruction+script only. There are no downloads, package installs, or archive extracts performed by the skill itself (trafilatura is a normal Python dependency).
Credentials
The code requires a Serper API key (SERPER_API_KEY or SERP_API_KEY), but the registry metadata listed "Required env vars: none". The README suggests adding the key to ~/.openclaw/.env, while the script auto-loads only the skill's own .env. This mismatch, plus the ambiguity about where the key should live, is an inconsistency you should clear up before use.
Persistence & Privilege
The skill does not request permanent/always-on inclusion and does not modify other skills or system-wide settings. It only auto-loads a .env file from its own skill directory.
What to consider before installing
This skill appears to do what it says (search via Serper and extract page text with trafilatura), but I found some inconsistencies you should address before installing:

  • Environment variable mismatch: The script requires SERPER_API_KEY (or SERP_API_KEY), but the published metadata claims no required env vars. Confirm you will supply a valid Serper API key and that you are comfortable storing it where the runtime will read it.
  • .env location ambiguity: The README suggests ~/.openclaw/.env, but the script auto-loads only .env in the skill directory. Decide where you want the key stored (system-wide vs skill-local) and update the files accordingly.
  • Network behavior: The script will fetch arbitrary result URLs (concurrently, with a 3s timeout) and extract their full HTML text. This is necessary for the stated purpose, but be aware it means the agent runtime will make outbound requests to third-party sites and temporarily download page content.
  • Operational considerations: Ensure the runtime environment has trafilatura installed for the Python interpreter that will run the script, and that outbound network access to google.serper.dev and result sites is acceptable. Review Serper's rate limits and the safety of placing an API key where it will be read by the agent runtime.

If you want to proceed, ask the skill owner to (1) update the published metadata to declare the required SERPER_API_KEY, and (2) clarify .env handling so you can safely provision the key. If those are corrected, the skill looks coherent and usable.
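The concurrent fetch behavior flagged above can be sketched roughly as follows. This is an illustration of the described pattern (parallel fetches, per-URL failure tolerance), not the skill's actual code; the function names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor


def fetch_all(urls, fetch_one, timeout=3.0):
    """Fetch result URLs concurrently; a failure on one URL yields None
    for that URL instead of aborting the whole batch (sketch of the
    concurrent, 3-second-timeout behavior described in the review)."""
    results = {}
    with ThreadPoolExecutor(max_workers=max(len(urls), 1)) as pool:
        futures = {url: pool.submit(fetch_one, url, timeout) for url in urls}
        for url, fut in futures.items():
            try:
                results[url] = fut.result()
            except Exception:
                results[url] = None  # blocked or timed-out page
    return results
```

The per-URL try/except matters here: one slow or scrape-hostile site should degrade only its own result, not the whole batch.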

Like a lobster shell, security has layers — review code before you run it.

latest · vk972psnkptca811bezrhwavf7s809zvz
2.1k downloads
1 star
1 version
Updated 1mo ago
v3.1.1
MIT-0

Serper

Google search via Serper API. Fetches results AND reads the actual web pages to extract clean full-text content via trafilatura. Not just snippets — full article text.

Constraint

This skill already fetches and extracts full page content. Do NOT use WebFetch, web_fetch, WebSearch, browser tools, or any other URL-fetching/browsing tool on the URLs returned by this skill. The content is already included in the output. Never follow up with a separate fetch — everything you need is in the results.

Query Discipline

Craft ONE good search query. That is almost always enough.

Each call returns multiple results with full page text — you get broad coverage from a single query. Do not run multiple searches to "explore" a topic. One well-chosen query with the right mode covers it.

At most two calls if the user's request genuinely spans two distinct topics (e.g. "compare X vs Y" where X and Y need separate searches, or one default + one current call for different aspects). Never more than two.

Do NOT:

  • Run the same query with different wording to "get more results"
  • Run sequential searches to "dig deeper" — the full page content is already deep
  • Run one search to find something, then another to follow up — read the content you already have

Two Search Modes

There are exactly two modes. Pick the right one based on the query:

default — General search (all-time)

  • All-time Google web search, 5 results, each enriched with full page content
  • Use for: general questions, research, how-to, evergreen topics, product info, technical docs, comparisons, tutorials, anything NOT time-sensitive

current — News and recent info

  • Past-week Google web search (3 results) + Google News (3 results), each enriched with full page content
  • Use for: news, current events, recent developments, breaking news, announcements, anything time-sensitive
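
The mode split above maps naturally onto two kinds of Serper API calls. A sketch of that mapping (the endpoint paths, the qdr:w past-week filter, and the payload shapes are assumptions about how the script uses Serper's documented API, not code from the skill itself; real requests would be POSTed with an X-API-KEY header):

```python
def build_requests(query: str, mode: str = "default") -> list:
    """Map a search mode to the Serper API calls it implies (hypothetical
    sketch: 'default' = one all-time web search with 5 results; 'current'
    = past-week web search plus Google News, 3 results each)."""
    base = "https://google.serper.dev"
    if mode == "current":
        return [
            # tbs=qdr:w restricts Google web results to the past week
            {"url": f"{base}/search", "payload": {"q": query, "num": 3, "tbs": "qdr:w"}},
            {"url": f"{base}/news", "payload": {"q": query, "num": 3}},
        ]
    return [{"url": f"{base}/search", "payload": {"q": query, "num": 5}}]
```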

Mode Selection Guide

Query signals                                           Mode
"how does X work", "what is X", "explain X"             default
Product research, comparisons, tutorials                default
Technical documentation, guides                         default
Historical topics, evergreen content                    default
"news", "latest", "today", "this week", "recent"        current
"what happened", "breaking", "announced", "released"    current
Current events, politics, sports scores, stock prices   current

Locale

Default is global — no country filter, English results. This ONLY works for English queries.

You MUST ALWAYS set --gl and --hl when ANY of these are true:

  • The user's message is in a non-English language
  • The search query you construct is in a non-English language
  • The user mentions a specific country, city, or region
  • The user asks for local results (prices, news, stores, etc.) in a non-English context

If the user writes in German, you MUST pass --gl de --hl de. No exceptions.

Scenario                                                      Flags
English query, no country target                              (omit --gl and --hl)
German query OR user writes in German OR targeting DE/AT/CH   --gl de --hl de
French query OR user writes in French OR targeting France     --gl fr --hl fr
Any other non-English language/country                        --gl XX --hl XX (ISO codes)

Rule of thumb: If the query string contains non-English words, set --gl and --hl to match that language.

How to Invoke

python3 scripts/search.py -q "QUERY" [--mode MODE] [--gl COUNTRY] [--hl LANG]

Examples

# English, general research
python3 scripts/search.py -q "how does HTTPS work"

# English, time-sensitive
python3 scripts/search.py -q "OpenAI latest announcements" --mode current

# German query — set locale + current mode for news/prices
python3 scripts/search.py -q "aktuelle Preise iPhone" --mode current --gl de --hl de

# German news
python3 scripts/search.py -q "Nachrichten aus Berlin" --mode current --gl de --hl de

# French product research
python3 scripts/search.py -q "meilleur smartphone 2026" --gl fr --hl fr
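
If an agent or wrapper script assembles these invocations programmatically, the flag rules above (non-default mode only when needed, locale flags omitted for English) can be encoded in a small helper. This is an illustrative convenience, not part of the skill:

```python
import shlex


def build_command(query, mode="default", gl=None, hl=None):
    """Assemble a search.py invocation per the SKILL.md flag rules:
    --mode only for non-default modes, --gl/--hl only when a locale is
    targeted (hypothetical helper, not shipped with the skill)."""
    cmd = ["python3", "scripts/search.py", "-q", query]
    if mode != "default":
        cmd += ["--mode", mode]
    if gl:
        cmd += ["--gl", gl]
    if hl:
        cmd += ["--hl", hl]
    return shlex.join(cmd)  # shell-safe quoting for the query string
```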

Output Format

The script streams a JSON array. The first element is metadata, the rest are results with full extracted content:

[{"query": "...", "mode": "default", "locale": {"gl": "world", "hl": "en"}, "results": [{"title": "...", "url": "...", "source": "web"}]}
,{"title": "Page Title", "url": "https://example.com", "source": "web", "content": "Full extracted page text..."}
,{"title": "News Article", "url": "https://news.com", "source": "news", "date": "2 hours ago", "content": "Full article text..."}
]
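
A consumer can split that array into the metadata header and the enriched results in a couple of lines. A minimal sketch, also handling the error object documented under Edge Cases below (parse_results is a hypothetical name, not a function the skill provides):

```python
import json


def parse_results(raw: str):
    """Split the script's streamed JSON array into the metadata header
    (first element) and the enriched result objects (the rest). The
    zero-results case emits a bare object, not an array."""
    data = json.loads(raw)
    if isinstance(data, dict):  # e.g. {"error": "No results found", ...}
        return data, []
    return data[0], data[1:]
```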
Field     Description
title     Page title
url       Source URL
source    "web", "news", or "knowledge_graph"
content   Full extracted page text (falls back to the search snippet if extraction fails)
date      Present when available (always for news results, sometimes for web results)
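
The content fallback can be sketched with the extraction callables injected. In the real script these would be trafilatura's fetch_url and extract (extract does return None when it cannot recover text); the function below is an illustration of the described behavior, not the skill's code:

```python
def extract_content(url, snippet, fetch, extract):
    """Fetch `url` and extract readable text via the given callables;
    fall back to the search snippet when the site blocks the fetch,
    the fetch errors out, or extraction yields nothing (sketch of the
    fallback behavior described in the field table)."""
    try:
        downloaded = fetch(url)
        text = extract(downloaded) if downloaded else None
    except Exception:
        text = None  # network error, timeout, or parser failure
    return text or snippet
```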

CLI Reference

Flag          Description
-q, --query   Search query (required)
-m, --mode    default (all-time, 5 results) or current (past week + news, 3 each)
--gl          Country code (e.g. de, us, fr, at, ch). Default: world
--hl          Language code (e.g. en, de, fr). Default: en

Edge Cases

  • If trafilatura cannot extract content from a page, the result falls back to the search snippet.
  • Some sites block scraping entirely — the snippet is all you get.
  • If zero results are returned, the script exits with {"error": "No results found", "query": "..."}.
  • The Serper API key is loaded from .env in the skill directory. If missing, the script exits with setup instructions.
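
The key-loading edge case can be sketched as follows, including the SERP_API_KEY fallback mentioned in the security review. This is a sketch of the described behavior under those assumptions, not the script's actual code:

```python
import os
import sys
from pathlib import Path


def load_api_key(skill_root):
    """Load the Serper key from the skill-local .env, accepting either
    SERPER_API_KEY or the SERP_API_KEY fallback; exit with setup
    instructions when neither is set (illustrative sketch)."""
    env_path = Path(skill_root) / ".env"
    if env_path.is_file():
        for line in env_path.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                name, _, value = line.partition("=")
                # Existing environment variables win over .env entries
                os.environ.setdefault(name.strip(), value.strip().strip('"').strip("'"))
    key = os.environ.get("SERPER_API_KEY") or os.environ.get("SERP_API_KEY")
    if not key:
        sys.exit("SERPER_API_KEY not set: add it to .env in the skill directory")
    return key
```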
