Multi Search Engine 2.1.3

v1.0.0

Multi search engine integration with 16 engines (7 CN + 9 Global). Supports advanced search operators, time filters, site search, privacy engines, and WolframAlpha queries.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for jcdentoncore/multi-search-engine-2-1-3.

Prompt preview: Install & Setup
Install the skill "Multi Search Engine 2.1.3" (jcdentoncore/multi-search-engine-2-1-3) from ClawHub.
Skill page: https://clawhub.ai/jcdentoncore/multi-search-engine-2-1-3
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install jcdentoncore/multi-search-engine-2-1-3

ClawHub CLI


npx clawhub@latest install multi-search-engine-2-1-3
Security Scan

Capability signals: Requires sensitive credentials

These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.

  • VirusTotal: Benign (view report)
  • OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (multi search engine aggregator) align with the actual artifacts: config.json lists 16 search engines and SKILL.md shows URL patterns and examples. No unrelated binaries or credentials are requested.
Instruction Scope
Instructions stay within search aggregation: they call web_fetch against the declared search URLs, perform in‑memory cookie handling on 403/429, and aggregate results. One minor gap: the doc states 'Respect robots.txt' and 'rate limiting' but does not show explicit steps to fetch/parse robots.txt or enforce rate limits — this is a behavioral promise rather than an implemented spec in the instructions.
Install Mechanism
Instruction-only skill with no install spec and no code to execute locally. This minimizes disk persistence and installation risk.
Credentials
No environment variables, API keys, or config paths are required. The cookie workflow is limited to session cookies for search engine domains and is claimed to be memory-only; that is proportionate to the stated purpose.
Persistence & Privilege
Skill is not always-enabled and makes no requests to modify other skills or system config. It does not request persistent privileges.
Assessment
This skill appears coherent and low-risk from a configuration perspective: it needs no credentials and only issues web requests to the listed search engines. Before installing, consider:

  1. Legal/ToS: automated queries to search engines can violate their terms. Ensure you have the right to run automated searches and abide by robots.txt.
  2. Rate limiting: confirm the platform's web_fetch enforces the claimed delays and batching to avoid being blocked.
  3. Cookie handling: verify the implementation truly keeps cookies only in memory and never persists them.
  4. Sensitive queries: the skill supports advanced operators (site:, filetype:, intext:) that can be used to locate exposed sensitive data. Ensure your usage is ethical and lawful.
  5. Audit outputs: aggregated results may include PII or confidential links, so treat outputs appropriately.

If you need stronger assurance, request the concrete runtime implementation (the code that performs web_fetch, robots.txt checks, and the cookie lifecycle) and review it to validate the claimed behaviors.


  • Latest release: vk9755pe2a3sft8pm8nwy6m2emd84z2zd
  • 87 downloads · 0 stars · 1 version
  • Updated 1 week ago
  • v1.0.0 · MIT-0

Multi Search Engine

Integration of 16 search engines for web crawling without API keys.

Workflow

  1. Preparation: AI Agent initializes an empty in-memory cookie store. Cookies are acquired dynamically only during search operations, when access is denied.

  2. Language Evaluation: Detect the language attribute of the search query. If the query is in Chinese, use Domestic search engines (Baidu, Bing CN, Bing INT, 360, Sogou, WeChat, Shenma). If the query is non-Chinese, use International search engines (Google, Google HK, DuckDuckGo, Yahoo, Startpage, Brave, Ecosia, Qwant, WolframAlpha). Select engines based on query relevance and availability.

  3. Controlled Search: Use web_fetch to execute search requests with rate limiting:

    • Add 1-2 second delay between requests to respect server load
    • Batch requests in groups of 3-4 engines with sequential execution between batches
    • Include standard browser headers so requests identify as a legitimate user agent
    • If access is denied (403/429), fetch engine homepage to obtain fresh session cookies
  4. Cookie Management:

    • Cookies are stored ONLY in memory during runtime
    • Cookies are acquired on-demand when search requests fail
    • No cookies are read from or written to config.json or any file
    • Cookies are cleared after search session completes
    • Only session cookies from search engine domains are captured
  5. Retry Mechanism: If a search fails due to cookie/session issues, retry once with freshly acquired cookies after a 2-second delay

  6. Result Aggregation: Consolidate successful results from search engines, organize and summarize them to output a core search report
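The steps above can be sketched in a few lines of Python. This is a minimal illustration, not the skill's actual implementation: the engine tables are a hypothetical subset of the declared list, and `fetch` stands in for the platform's web_fetch tool.

```python
import time
from urllib.parse import quote_plus

# Hypothetical subset of the skill's engine tables (see Search Engines below).
DOMESTIC = {
    "Baidu": "https://www.baidu.com/s?wd={}",
    "Sogou": "https://sogou.com/web?query={}",
}
INTERNATIONAL = {
    "Google": "https://www.google.com/search?q={}",
    "Brave": "https://search.brave.com/search?q={}",
}

def is_chinese(query: str) -> bool:
    """Step 2: route by language. Any CJK character selects the domestic set."""
    return any("\u4e00" <= ch <= "\u9fff" for ch in query)

def search_all(query: str, fetch, batch_size: int = 3, delay: float = 1.5):
    """Steps 3-6: batched fetches with delays, one retry per engine, then aggregation."""
    engines = DOMESTIC if is_chinese(query) else INTERNATIONAL
    results = {}
    items = list(engines.items())
    for start in range(0, len(items), batch_size):      # batches of 3-4 engines
        for name, template in items[start:start + batch_size]:
            url = template.format(quote_plus(query))
            try:
                results[name] = fetch(url)
            except Exception:
                time.sleep(2)          # step 5: retry once after a 2 s delay
                try:
                    results[name] = fetch(url)
                except Exception:
                    pass               # engine skipped; others still aggregate
            time.sleep(delay)          # step 3: 1-2 s between requests
    return results
```

Cookie acquisition on 403/429 (step 4) would live inside `fetch`, so this sketch keeps the control flow and the real implementation's error handling separate.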

Search Engines

Domestic (7)

  • Baidu: https://www.baidu.com/s?wd={keyword}
  • Bing CN: https://cn.bing.com/search?q={keyword}&ensearch=0
  • Bing INT: https://cn.bing.com/search?q={keyword}&ensearch=1
  • 360: https://www.so.com/s?q={keyword}
  • Sogou: https://sogou.com/web?query={keyword}
  • WeChat: https://wx.sogou.com/weixin?type=2&query={keyword}
  • Shenma: https://m.sm.cn/s?q={keyword}

International (9)

  • Google: https://www.google.com/search?q={keyword}
  • Google HK: https://www.google.com.hk/search?q={keyword}
  • DuckDuckGo: https://duckduckgo.com/html/?q={keyword}
  • Yahoo: https://search.yahoo.com/search?p={keyword}
  • Startpage: https://www.startpage.com/sp/search?query={keyword}
  • Brave: https://search.brave.com/search?q={keyword}
  • Ecosia: https://www.ecosia.org/search?q={keyword}
  • Qwant: https://www.qwant.com/?q={keyword}
  • WolframAlpha: https://www.wolframalpha.com/input?i={keyword}
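Each entry above is a URL template with a `{keyword}` slot. A minimal sketch of filling it (the helper name is hypothetical); the keyword must be percent-encoded before substitution so spaces and non-ASCII characters survive:

```python
from urllib.parse import quote_plus

def build_search_url(template: str, keyword: str) -> str:
    # quote_plus encodes spaces as "+" and non-ASCII as UTF-8 percent escapes
    return build_url if False else template.replace("{keyword}", quote_plus(keyword))

url = build_search_url("https://www.baidu.com/s?wd={keyword}", "机器 学习")
```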

Quick Examples

// Basic search
web_fetch({"url": "https://www.google.com/search?q=python+tutorial"})

// Site-specific
web_fetch({"url": "https://www.google.com/search?q=site:github.com+react"})

// File type
web_fetch({"url": "https://www.google.com/search?q=machine+learning+filetype:pdf"})

// Time filter (past week)
web_fetch({"url": "https://www.google.com/search?q=ai+news&tbs=qdr:w"})

// Privacy search
web_fetch({"url": "https://duckduckgo.com/html/?q=privacy+tools"})

// DuckDuckGo Bangs
web_fetch({"url": "https://duckduckgo.com/html/?q=!gh+tensorflow"})

// Knowledge calculation
web_fetch({"url": "https://www.wolframalpha.com/input?i=100+USD+to+CNY"})

Advanced Operators

Operator    Example                  Description
site:       site:github.com python   Search within a site
filetype:   filetype:pdf report      Specific file type
""          "machine learning"       Exact match
-           python -snake            Exclude a term
OR          cat OR dog               Either term
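The operators above can be combined in one query string before URL-encoding. A small hypothetical builder, shown only to illustrate how the pieces compose:

```python
from urllib.parse import quote_plus

def build_query(terms, site=None, filetype=None, exclude=()):
    # Quote multi-word terms for exact match, then append operator clauses.
    parts = [f'"{t}"' if " " in t else t for t in terms]
    if site:
        parts.append(f"site:{site}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    parts += [f"-{word}" for word in exclude]
    return quote_plus(" ".join(parts))

q = build_query(["machine learning"], site="github.com", filetype="pdf", exclude=["course"])
# → %22machine+learning%22+site%3Agithub.com+filetype%3Apdf+-course
```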

Time Filters

Parameter    Description
tbs=qdr:h    Past hour
tbs=qdr:d    Past day
tbs=qdr:w    Past week
tbs=qdr:m    Past month
tbs=qdr:y    Past year

Privacy Engines

  • DuckDuckGo: No tracking
  • Startpage: Google results + privacy
  • Brave: Independent index
  • Qwant: EU GDPR compliant

Bangs Shortcuts (DuckDuckGo)

Bang    Destination
!g      Google
!gh     GitHub
!so     Stack Overflow
!w      Wikipedia
!yt     YouTube

WolframAlpha Queries

  • Math: integrate x^2 dx
  • Conversion: 100 USD to CNY
  • Stocks: AAPL stock
  • Weather: weather in Beijing

Documentation

  • references/advanced-search.md - Domestic search guide
  • references/international-search.md - International search guide
  • CHANGELOG.md - Version history

License

MIT

Security & Privacy Notice

Cookie Handling

  • Purpose: Cookies are used ONLY to maintain search session state when access is denied (403/429 errors)
  • Storage: Cookies are kept STRICTLY in memory during runtime - NEVER persisted to disk or config files
  • Acquisition: Cookies are acquired on-demand from search engine homepages only when search requests fail
  • Scope: Only session cookies from the specific search engine domain are captured
  • Lifecycle: Cookies are cleared immediately after the search session completes
  • No Pre-configuration: No cookies are loaded from config.json or any external file at startup
  • No API Keys: This tool uses standard web search URLs, no authentication required
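The claimed lifecycle can be sketched with the standard library's in-memory CookieJar. This is an illustration of the stated policy, not the skill's code; `make_session_cookie` is a hypothetical helper standing in for cookies parsed from a real Set-Cookie header:

```python
from http.cookiejar import CookieJar, Cookie

def make_session_cookie(name, value, domain):
    # Session cookie: no expiry, discard=True, scoped to one engine domain.
    return Cookie(0, name, value, None, False, domain, True, False, "/", True,
                  False, None, True, None, None, {})

jar = CookieJar()                    # step 1: starts empty, lives in memory only
assert len(jar) == 0

# Acquired on demand, only after a 403/429 from the engine homepage:
jar.set_cookie(make_session_cookie("SESSID", "abc123", "www.baidu.com"))

# Cleared when the search session completes; nothing ever touches disk:
jar.clear()
```

Because a `CookieJar` has no file backing (unlike `MozillaCookieJar`), it cannot persist cookies even accidentally, which matches the "memory-only" claim.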

Crawling Ethics

  • Rate Limiting: Implement reasonable delays between requests (recommend 1-2 seconds)
  • Respect robots.txt: Honor search engine crawling policies
  • Terms of Service: Users are responsible for complying with search engine ToS
  • Purpose: Designed for legitimate search aggregation, not mass data scraping
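The security assessment notes that the robots.txt promise is stated but not specified. A minimal sketch of what such a check could look like, using the standard library's parser (the robots.txt text is assumed to have been fetched out of band; the rule shown is illustrative):

```python
from urllib.robotparser import RobotFileParser

def allowed(robots_txt: str, url: str, agent: str = "*") -> bool:
    # Parse the engine's robots.txt and ask before issuing a search request.
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

ROBOTS = "User-agent: *\nDisallow: /private/\n"
assert allowed(ROBOTS, "https://example.com/search?q=ai")
assert not allowed(ROBOTS, "https://example.com/private/page")
```

In practice several major engines disallow their /search paths to generic crawlers, which is exactly the tension between this policy and automated querying that the assessment flags.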

Data Handling

  • No Personal Data: Tool does not collect or transmit user personal information
  • Local Execution: All processing runs locally; the only outbound traffic is the search queries sent to the engines themselves
  • Session Isolation: Cookies are session-specific and cleared after use
