Querit Search

v0.1.0

Web search via Querit.ai API. Use when you need to search the web for documentation, current events, facts, or any web content. Returns structured results with titles, URLs, and snippets.

by Kyle Sun (@interskh)
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description align with what the code does. The skill requires a single credential (QUERIT_API_KEY), which is used as a Bearer token to call https://api.querit.ai/v1/search. Search and content-extraction functionality is implemented in search.js and content.js respectively, which is appropriate for the described purpose.
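
The Bearer-token call described above can be sketched as follows. Only the endpoint and the Authorization header come from the review; the request method, body fields (q, count), and response shape are assumptions about the Querit API, not confirmed behavior.

```javascript
// Build the fetch URL and options for a hypothetical Querit search request.
// Endpoint and Bearer auth are from the review; body fields are assumed.
function buildSearchRequest(query, apiKey, count = 5) {
  return {
    url: "https://api.querit.ai/v1/search",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ q: query, count }),
    },
  };
}

// Usage (assumes QUERIT_API_KEY is set in the environment):
// const { url, options } = buildSearchRequest("react hooks", process.env.QUERIT_API_KEY);
// const results = await fetch(url, options).then((r) => r.json());
```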
Instruction Scope
SKILL.md and the CLI only instruct the agent to call the Querit API and (optionally) fetch and extract page content. That matches the stated purpose. A notable operational detail: content.js will fetch arbitrary URLs from the host where the skill runs and return extracted page content. This is expected for a content-extraction feature, but it means the skill can be used to load internal or protected endpoints (SSRF-like risk) if asked to fetch internal URLs. The instructions do not read unrelated env vars or config files.
Install Mechanism
The install flow is Node/npm-based (package.json + package-lock) and an install.sh that downloads files from raw.githubusercontent.com/interskh/querit-search/main and runs npm ci/install. GitHub raw and npm are common package hosts, but running a remote curl | bash installer and installing npm dependencies carries moderate risk (dependencies can include lifecycle scripts). The install script itself is straightforward and writes into ~/.openclaw/skills/querit-search; no obscure external downloads or archives are used beyond npm and GitHub raw.
Credentials
Only QUERIT_API_KEY is required and used as the primary credential for the Querit API. The skill documents alternative config/storage in OpenClaw config or .env; there are no unrelated or excessive environment variable requirements.
Persistence & Privilege
The skill's `always` flag is false, so it does not run on every agent turn. It installs under the user's skills directory and does not modify other skills or system-wide settings. It can be invoked autonomously by the agent (the platform default), which is expected for a search skill; apply the usual autonomy considerations when granting runtime capability to call external APIs and fetch URLs.
Assessment
This skill appears to do what it says: it needs only your Querit API key, performs searches against Querit.ai, and (optionally) fetches pages to extract readable content. Before installing:

  1. Prefer inspecting files locally rather than piping a remote install script into bash: clone the repo and run npm ci yourself.
  2. Be aware that npm installs dependencies from the public registry; if you need stronger guarantees, audit the dependency tree or run it in a sandbox.
  3. Avoid asking the skill to fetch internal or sensitive URLs (e.g., 169.254.x.x, 127.0.0.1, internal hostnames): content.js makes HTTP requests from your environment and could disclose internal content to the agent.
  4. Only provide a Querit API key you are comfortable using with this skill, and create a limited or revocable key if Querit supports that.

To harden the skill further, you could replace the curl|bash installer with a manual install, or restrict content.js to an allowlist of hosts.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Runtime: Clawdis
Env: QUERIT_API_KEY (primary)

Install

Install npm dependencies.

Version: v0.1.0 (latest: vk973eqg9vzg6r1bfr4pxy2mawd80fg19)
1.8k downloads · 3 stars · 1 version · Updated 1mo ago
License: MIT-0

Querit Search

Web search and content extraction via the Querit.ai API. No browser required.

Setup

Needs env: QUERIT_API_KEY — get a free key at https://querit.ai (1,000 queries/month).

Search

node {baseDir}/search.js "query"                          # 5 results (default)
node {baseDir}/search.js "query" -n 10                    # more results (max 100)
node {baseDir}/search.js "query" --lang english            # language filter
node {baseDir}/search.js "query" --country "united states" # country filter
node {baseDir}/search.js "query" --date w1                 # past week (d1/w1/m1/y1)
node {baseDir}/search.js "query" --site-include github.com # only this domain
node {baseDir}/search.js "query" --site-exclude reddit.com # exclude domain
node {baseDir}/search.js "query" --content                 # also extract page content
node {baseDir}/search.js "query" --json                    # raw JSON output

Flags can be combined:

node {baseDir}/search.js "react hooks" -n 3 --lang english --site-include reactjs.org --content
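
When driving search.js from another Node script, the flags above can be assembled into an argv array. The flag names come from this doc; the helper itself is hypothetical and not part of the skill.

```javascript
// Assemble search.js arguments from an options object.
// Flag names (-n, --lang, --site-include, --content, --json) are from the doc.
function buildSearchArgs(query, opts = {}) {
  const args = [query];
  if (opts.n) args.push("-n", String(opts.n));
  if (opts.lang) args.push("--lang", opts.lang);
  if (opts.siteInclude) args.push("--site-include", opts.siteInclude);
  if (opts.siteExclude) args.push("--site-exclude", opts.siteExclude);
  if (opts.content) args.push("--content");
  if (opts.json) args.push("--json");
  return args;
}

// Usage: execFileSync("node", [`${baseDir}/search.js`, ...buildSearchArgs("react hooks", { n: 3 })])
```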

Extract Page Content

node {baseDir}/content.js https://example.com/article

Fetches a URL and extracts the main readable content as markdown.
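
A naive illustration of the HTML-to-markdown step is below. The real content.js almost certainly uses a proper readability/markdown library; this regex sketch only converts headings, breaks paragraphs, and strips remaining tags.

```javascript
// Crude HTML-to-markdown-ish conversion for illustration only:
// drop scripts/styles, turn <h1>-<h6> into #-headings, strip other tags.
function htmlToMarkdownish(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<style[\s\S]*?<\/style>/gi, "")
    .replace(/<h([1-6])[^>]*>([\s\S]*?)<\/h\1>/gi,
      (_, level, text) => "\n" + "#".repeat(Number(level)) + " " + text.trim() + "\n")
    .replace(/<\/p>/gi, "\n\n")   // paragraph breaks
    .replace(/<[^>]+>/g, "")      // strip all remaining tags
    .replace(/\n{3,}/g, "\n\n")   // collapse extra blank lines
    .trim();
}
```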

Output Format

Search results (default)

1. Page Title
   https://example.com/page
   Site: example.com
   Age: 3 days ago
   Description snippet from search results

2. Another Page
   ...

With --content

After the result listing, each page's extracted markdown content is appended:

### 1. Page Title
URL: https://example.com/page

# Extracted heading
Extracted body content in markdown...

---

With --json

Raw JSON array of result objects with fields: url, title, snippet, page_age, page_time.

When to Use

  • Searching for documentation, API references, or tutorials
  • Looking up facts, current events, or recent information
  • Finding content from specific websites (use --site-include)
  • Fetching and reading a web page's content (use --content or content.js)
  • Any task requiring web search without interactive browsing

Limitations

  • Query limited to 72 characters (auto-truncated with warning)
  • Max 100 results per query
  • Max 20 domains per site filter
  • Free tier: 1,000 queries/month, 1 QPS
  • Supported languages: english, japanese, korean, german, french, spanish, portuguese
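
The 72-character query limit above can be handled client-side. Whether search.js truncates exactly like this is an assumption; the doc only says queries are auto-truncated with a warning.

```javascript
const MAX_QUERY_LEN = 72; // documented Querit query limit

// Truncate over-long queries and report whether truncation happened,
// so the caller can emit its own warning.
function truncateQuery(q) {
  if (q.length <= MAX_QUERY_LEN) return { query: q, truncated: false };
  return { query: q.slice(0, MAX_QUERY_LEN), truncated: true };
}
```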
