Serper Search
v1.0.0

Search Google programmatically using the Serper.dev API. Returns structured organic results (title, URL, snippet, domain) for any query. Use when you need to...
Security Scan
OpenClaw
Benign
medium confidence

Purpose & Capability
Name/description (programmatic Google search via Serper.dev) align with the instructions and example code, which call serper.dev endpoints and return organic results. Requesting an API key (SERPER_API_KEY) is expected for this purpose.
Instruction Scope
SKILL.md only instructs HTTP calls to Serper.dev endpoints and uses the SERPER_API_KEY environment variable. It includes lead-generation examples and uses the Places endpoint (which returns phone numbers/websites) — collecting such contact data is within the declared purpose but is PII-relevant and you should ensure your use complies with laws/policies.
Install Mechanism
Instruction-only skill (no install spec), so nothing is written to disk by the skill itself. However, the examples import `requests` and `tldextract` without declaring them or providing installation instructions — you'll need to ensure the runtime environment has these dependencies.
Credentials
The SKILL.md requires a single credential (SERPER_API_KEY), which is proportional. However, the registry metadata at the top of the evaluation lists 'Required env vars: none' while SKILL.md declares the env requirement and primaryEnv. This mismatch should be resolved before trusting automated installs or credential provisioning.
Persistence & Privilege
The skill does not request always:true and has no install hooks or config paths. It does not ask for system-wide permissions or other skills' credentials.
Assessment
This skill appears to do what it says: programmatic Google-style search via Serper.dev using SERPER_API_KEY. Before installing:

1. Confirm the source/owner and prefer skills with a homepage or repository (this skill lists 'source: unknown').
2. Provide only the SERPER_API_KEY credential and verify its scope/usage limits in your Serper account.
3. Ensure your runtime environment has the Python dependencies used in the examples (requests, tldextract), or add explicit install steps.
4. Be aware the Places endpoint can return phone numbers and other contact info — confirm this is allowed for your use case and complies with privacy laws and your organization's policies.
5. Fix the metadata mismatch (registry says no env vars while SKILL.md requires SERPER_API_KEY) before enabling automated credential provisioning.

If you need stronger assurance, ask the publisher for a link to a homepage or source repo and for explicit dependency/install instructions.

Like a lobster shell, security has layers — review code before you run it.
Serper.dev Search Skill
Fast, cheap Google Search API. Returns clean JSON. Best choice for lead generation pipelines.
Credentials
Requires SERPER_API_KEY in environment.
Get a free key (2,500 searches) at serper.dev.
```shell
SERPER_API_KEY=your_key_here
```
Quick Usage
```python
import os

import requests
import tldextract


def serper_search(query: str, num: int = 10, gl: str = "us") -> list[dict]:
    """
    Search Google via Serper.dev.
    Returns list of: {title, link, snippet, displayedLink, domain}
    """
    headers = {
        "X-API-KEY": os.environ["SERPER_API_KEY"],
        "Content-Type": "application/json",
    }
    payload = {"q": query, "num": num, "gl": gl}
    r = requests.post("https://google.serper.dev/search", headers=headers, json=payload, timeout=30)
    r.raise_for_status()
    results = r.json().get("organic", [])
    # Add a clean registered-domain field (e.g. "example.com") to each result
    for result in results:
        ext = tldextract.extract(result["link"])
        result["domain"] = f"{ext.domain}.{ext.suffix}"
    return results
```
Lead Generation Usage
```python
# Uses serper_search() from Quick Usage above.
EXCLUDE_DOMAINS = {
    "yelp.com", "facebook.com", "tripadvisor.com", "houzz.com",
    "thumbtack.com", "homeadvisor.com", "angi.com", "angieslist.com",
    "bbb.org", "yellowpages.com", "manta.com", "bark.com",
    "google.com", "nextdoor.com", "linkedin.com", "instagram.com",
}


def search_business_leads(niche: str, city: str, num_queries: int = 3) -> list[str]:
    """
    Run multiple queries for a niche+city combo.
    Returns a deduplicated list of direct business website URLs.
    """
    query_templates = [
        f'"{niche} {city}"',
        f'"{niche} company {city}"',
        f'"{niche} service {city}"',
        f'"{city} {niche} contractor"',
    ]
    seen_domains = set()
    urls = []
    for template in query_templates[:num_queries]:
        query = template + " -yelp -facebook -tripadvisor -houzz -thumbtack -homeadvisor"
        results = serper_search(query, num=10)
        for r in results:
            domain = r["domain"]
            if domain not in EXCLUDE_DOMAINS and domain not in seen_domains:
                seen_domains.add(domain)
                urls.append(r["link"])
    return urls
```
Available Endpoints
| Endpoint | URL | Use |
|---|---|---|
| Web search | https://google.serper.dev/search | Organic results |
| News | https://google.serper.dev/news | News articles |
| Images | https://google.serper.dev/images | Image results |
| Places | https://google.serper.dev/places | Google Maps results |
| Shopping | https://google.serper.dev/shopping | Product results |
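All the endpoints above accept the same POST shape: a JSON body with the query and an `X-API-KEY` header. A sketch of a generic request builder (the `ENDPOINTS` names come from the table above; `build_request` itself is illustrative, not part of the skill):

```python
ENDPOINTS = {
    "search": "https://google.serper.dev/search",
    "news": "https://google.serper.dev/news",
    "images": "https://google.serper.dev/images",
    "places": "https://google.serper.dev/places",
    "shopping": "https://google.serper.dev/shopping",
}


def build_request(endpoint: str, query: str, api_key: str, **params) -> tuple[str, dict, dict]:
    """Return (url, headers, payload) for any Serper endpoint."""
    if endpoint not in ENDPOINTS:
        raise ValueError(f"unknown endpoint: {endpoint}")
    headers = {"X-API-KEY": api_key, "Content-Type": "application/json"}
    payload = {"q": query, **params}  # extra params like num/gl pass straight through
    return ENDPOINTS[endpoint], headers, payload
```

Splitting request construction from sending also makes the call path easy to unit-test without hitting the API.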
Places Endpoint (Local Business Discovery Alternative)
```python
def serper_places(query: str, gl: str = "us") -> list[dict]:
    """
    Search Google Maps / Local Places.
    Returns: {title, address, phone, website, rating, ratingCount}
    """
    headers = {"X-API-KEY": os.environ["SERPER_API_KEY"], "Content-Type": "application/json"}
    payload = {"q": query, "gl": gl}
    r = requests.post("https://google.serper.dev/places", headers=headers, json=payload, timeout=30)
    r.raise_for_status()
    return r.json().get("places", [])


# Places returns website URLs + phone numbers directly — great for lead gen!
# usage: results = serper_places("landscaping company Portland OR")
# each result: {title, address, phone, website, rating, ratingCount}
```
Response Structure
```json
{
  "organic": [
    {
      "title": "Green Valley Landscaping | Portland OR",
      "link": "https://www.greenvalleylandscaping.com",
      "snippet": "Professional landscaping services in Portland...",
      "displayedLink": "www.greenvalleylandscaping.com",
      "position": 1
    }
  ],
  "searchParameters": {"q": "landscaping portland OR", "gl": "us", "num": 10}
}
```
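A pure parsing helper over that structure can be handy when you only need a flat summary. A sketch using only the stdlib (`summarize_organic` is illustrative, not part of the skill):

```python
from urllib.parse import urlparse


def summarize_organic(response: dict) -> list[tuple[int, str, str]]:
    """Flatten an organic response into (position, title, host) tuples."""
    rows = []
    for item in response.get("organic", []):
        host = urlparse(item["link"]).netloc  # e.g. "www.greenvalleylandscaping.com"
        rows.append((item.get("position", 0), item["title"], host))
    return rows
```

Unlike the `tldextract` approach earlier, `urlparse` keeps the full host including any `www.` prefix.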
Pricing Reference
| Plan | Cost | Searches |
|---|---|---|
| Free | $0 | 2,500 searches |
| Starter | $50 | 50,000 searches |
| Standard | $100 | 150,000 searches |
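For budgeting, the effective unit cost follows directly from the table: Starter works out to $0.001 per search and Standard to roughly $0.00067. A trivial sketch of that arithmetic:

```python
def cost_per_search(plan_cost: float, searches: int) -> float:
    """Effective dollars per search for a plan (from the pricing table above)."""
    return plan_cost / searches
```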
Rate Limits & Best Practices
- No hard rate limit stated — but be reasonable
- Add a 0.5s delay between rapid consecutive calls
- Cache results to avoid re-querying the same niche+city
- Use `num=10` (default); max is 100 per request
- The `gl` param controls country: "us", "gb", "au", "ca", etc.
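The caching and delay advice above can be combined into one small wrapper. A sketch, not part of the skill (the injectable `sleep` parameter exists purely to keep it testable):

```python
import time


class ThrottledCache:
    """Cache results per query and enforce a minimum interval between live calls."""

    def __init__(self, search_fn, min_interval: float = 0.5, sleep=time.sleep):
        self.search_fn = search_fn
        self.min_interval = min_interval
        self.sleep = sleep
        self.cache: dict[str, list[dict]] = {}
        self._last_call = 0.0

    def search(self, query: str) -> list[dict]:
        if query in self.cache:
            return self.cache[query]  # cache hit: no API call, no delay
        wait = self.min_interval - (time.monotonic() - self._last_call)
        if wait > 0:
            self.sleep(wait)  # throttle consecutive live calls
        results = self.search_fn(query)
        self._last_call = time.monotonic()
        self.cache[query] = results
        return results
```

Usage would be `tc = ThrottledCache(serper_search)` followed by `tc.search("landscaping portland")`; repeated queries for the same niche+city then cost nothing.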
Error Handling
```python
import time

import requests


def serper_search_safe(query: str, num: int = 10) -> list[dict]:
    try:
        return serper_search(query, num)
    except requests.exceptions.HTTPError as e:
        if e.response.status_code == 429:
            print("Rate limited — waiting 5s")
            time.sleep(5)
            return serper_search(query, num)  # single retry after the wait
        print(f"Serper error: {e}")
        return []
    except Exception as e:
        print(f"Unexpected error: {e}")
        return []
```
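The handler above retries once after a fixed wait; for heavier pipelines an exponential backoff wrapper may be worth the extra lines. A stdlib-only sketch (`with_backoff` is illustrative, not part of the skill):

```python
import time


def with_backoff(fn, *, retries: int = 3, base_delay: float = 1.0,
                 retry_on: type = Exception, sleep=time.sleep):
    """Retry fn() on retry_on, waiting base_delay, 2x, 4x, ... between attempts."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except retry_on:
            if attempt == retries:
                raise  # out of retries: let the caller see the error
            sleep(base_delay * (2 ** attempt))
```

For example, `with_backoff(lambda: serper_search(q), retry_on=requests.exceptions.HTTPError)` would retry transient failures with growing delays.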