Web Search

v1.3.0

A general-purpose web search skill with multi-engine support (Baidu, Bing, DuckDuckGo); fetches real-time information without requiring an API key.

9 stars · 4.2k downloads · 24 current · 25 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for yejinlei/web-search-ex-skill.

Prompt Preview: Install & Setup
Install the skill "Web Search" (yejinlei/web-search-ex-skill) from ClawHub.
Skill page: https://clawhub.ai/yejinlei/web-search-ex-skill
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

```shell
openclaw skills install web-search-ex-skill
```

ClawHub CLI


```shell
npx clawhub@latest install web-search-ex-skill
```
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description (multi-engine web search without API keys) aligns with included code and declared dependencies (baidusearch, Playwright, crawl4ai, requests). The script implements search and crawl functions and uses the listed libraries.
Instruction Scope
SKILL.md instructs the agent to run scripts/web_search.py (entry point: main) to perform 'search', 'deep_search', and 'crawl' actions. Those actions necessarily fetch arbitrary web pages and may run browser automation (Playwright). The instructions do not request unrelated files or env vars, but 'crawl' allows fetching arbitrary URLs (risk of accessing internal resources/SSRF) and 'deep_search' may fetch and extract page contents. This is expected for a crawler/search skill but increases attack surface and data exposure.
Install Mechanism
No explicit install spec in registry (instruction-only), but the package includes requirements.txt and Python code. Dependencies are standard PyPI packages. Playwright will download a Chromium browser on first run (~100MB) as noted — expected but a side-effect. Using third-party packages (crawl4ai, baidusearch, uv) is normal but introduces supply-chain risk; the setup.py references a GitHub URL but README.md referenced in setup.py is not present in the manifest (minor inconsistency).
Credentials
The skill does not request any environment variables, primary credentials, or config paths. The code shown does not access local secrets or unusual system configuration. Required libraries are consistent with web scraping/search functionality.
Persistence & Privilege
Flags show no always:true or other elevated persistence. The skill is user-invocable and can be invoked autonomously (platform default) but does not request permanent presence or modify other skills' configurations.
Assessment
This skill appears to be what it claims (a scraper-based web search), but before installing:

  1. Review the full web_search.py source (the provided file was partially truncated here) to ensure there are no hidden network callbacks or unexpected data exfiltration.
  2. Evaluate the trustworthiness of the third-party dependencies (crawl4ai, baidusearch, uv) and consider pinning versions or auditing those packages.
  3. Run the skill in an isolated/sandboxed environment, because it will fetch arbitrary URLs and Playwright will download and execute a browser.
  4. Avoid giving it sensitive internal URLs to crawl (SSRF / internal-data exposure risk).
  5. If you need stronger guarantees, request the upstream homepage/repo or a signed release so you can verify provenance.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97fn83h35vhdvyj8a9waf8r4982ns97
4.2k downloads
9 stars
1 version
Updated 1mo ago
v1.3.0
MIT-0

Web Search Skill

A powerful web search skill supporting multiple search engines without requiring API keys.

Features

  • 🔍 Multi-Engine Support: Baidu (Playwright), Bing, DuckDuckGo
  • 🌐 No API Key Required: Uses browser automation and web scraping
  • 🔄 Smart Fallback: Automatically switches engines when one fails
  • 📊 Structured Results: Returns clean search results with title, URL, and snippet
  • 🚀 High Performance: Async support with Playwright browser automation

Usage

Basic Search

```python
result = main({
    "action": "search",
    "query": "Python tutorial",
    "num_results": 5
})
```

Deep Search

```python
result = main({
    "action": "deep_search",
    "query": "machine learning latest research",
    "num_results": 5
})
```

Web Page Crawling

```python
result = main({
    "action": "crawl",
    "url": "https://example.com"
})
```
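The crawl action will fetch whatever URL it is given, which is why the security scan above flags SSRF risk. A minimal caller-side guard (a hypothetical helper, not part of the skill) could reject URLs whose host resolves to a private or loopback address before passing them to `main`:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_public_http_url(url: str) -> bool:
    """Best-effort SSRF guard: accept only http(s) URLs whose host
    resolves exclusively to globally routable IP addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if not ip.is_global:  # rejects loopback, private, link-local, reserved
            return False
    return True
```

Note that DNS can change between the check and the fetch (rebinding), so for strict isolation a network-level sandbox is still the safer option.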

Input Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| action | string | Yes | Operation type: "search", "deep_search", or "crawl" |
| query | string | Conditional | Search query (required for search/deep_search) |
| url | string | Conditional | Target URL (required for crawl) |
| num_results | int | No | Number of results; default 5, max 20 |
| region | string | No | Region code; default 'cn-zh' |
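The requirement rules in the table can be checked before invoking the skill. A sketch of such a pre-flight check (the helper name and error messages are this example's own; the skill performs its own validation internally):

```python
from typing import Optional

VALID_ACTIONS = {"search", "deep_search", "crawl"}

def validate_params(params: dict) -> Optional[str]:
    """Return an error message for invalid input, or None if the
    params satisfy the rules in the table above."""
    action = params.get("action")
    if action not in VALID_ACTIONS:
        return 'action must be "search", "deep_search", or "crawl"'
    if action in ("search", "deep_search") and not params.get("query"):
        return "query is required for search and deep_search"
    if action == "crawl" and not params.get("url"):
        return "url is required for crawl"
    num = params.get("num_results", 5)
    if not isinstance(num, int) or not 1 <= num <= 20:
        return "num_results must be an integer between 1 and 20"
    return None
```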

Output Format

Search Result

```python
{
    "success": True,
    "query": "search query",
    "engine": "baidu+playwright",
    "num_results": 5,
    "results": [
        {
            "title": "Result title",
            "href": "https://...",
            "body": "Snippet content"
        }
    ],
    "message": "Search completed"
}
```
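A caller consuming this shape can branch on the `success` flag and walk `results`; a small formatting sketch (the helper is hypothetical):

```python
def format_results(response: dict) -> list:
    """Render a search response (shape shown above) as printable lines."""
    if not response.get("success"):
        return ["search failed: " + response.get("message", "unknown error")]
    return [
        f"{i}. {r['title']} - {r['href']}"
        for i, r in enumerate(response.get("results", []), start=1)
    ]
```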

Deep Search Result

```python
{
    "success": True,
    "query": "search query",
    "search_results": [...],
    "detailed_info": {
        "extracted_content": "..."
    },
    "message": "Deep search completed"
}
```

Execution

```yaml
type: script
script_path: scripts/web_search.py
entry_point: main
dependencies:
  - uv>=0.1.0
  - requests>=2.28.0
  - baidusearch>=1.0.3
  - crawl4ai>=0.8.0
  - playwright>=1.40.0
```

Search Strategy

  1. Primary: baidusearch library (fastest, no browser)
  2. Secondary: Playwright + Baidu (most reliable, bypasses anti-bot)
  3. Tertiary: DuckDuckGo (privacy-focused)
  4. Fallback: Bing (international)
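The chain above amounts to trying engines in priority order until one yields results. A simplified sketch of that pattern (names and return shape are this example's assumptions, not the skill's exact internals):

```python
def search_with_fallback(query, engines):
    """Try each engine in order; return the first non-empty result set.

    `engines` is an ordered list of (name, callable) pairs, where each
    callable maps a query to a list of result dicts or raises on failure --
    a stand-in for the four-engine chain described above.
    """
    errors = []
    for name, engine in engines:
        try:
            results = engine(query)
        except Exception as exc:
            errors.append(f"{name}: {exc}")
            continue
        if results:
            return {"success": True, "engine": name, "results": results}
        errors.append(f"{name}: no results")
    return {"success": False, "message": "; ".join(errors) or "no engines configured"}
```

Collecting per-engine errors instead of swallowing them lets the final failure message explain why every engine in the chain was skipped.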

Notes

  1. First Run: Playwright will download Chromium browser on first use (~100MB)
  2. Rate Limiting: Be mindful of search frequency to avoid temporary blocks
  3. Network: Requires internet connection
  4. Results: May vary based on search engine algorithms and location

Error Handling

  • Returns {"success": False, "message": "..."} on errors
  • Automatically retries with fallback engines
  • Graceful degradation when optional dependencies are missing
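Because errors come back as a `{"success": False, ...}` payload rather than an exception, callers that prefer fail-fast behavior can wrap the entry point; a sketch (the wrapper and its exception type are this example's choice):

```python
def crawl_or_raise(main, url):
    """Invoke the skill entry point for a crawl and raise on failure,
    so callers don't silently consume error payloads.

    `main` is the skill's entry point (scripts/web_search.py: main).
    """
    result = main({"action": "crawl", "url": url})
    if not result.get("success"):
        raise RuntimeError(result.get("message", "crawl failed"))
    return result
```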
