Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Exa Search (Rust)

Neural web search, similar-page discovery, and URL content fetching via the Exa AI search engine. USE WHEN: user asks to search the web, find articles/repos/...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 328 · 5 current installs · 5 all-time installs
Security Scan
VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the implementation: the Rust binary implements search, find_similar, and get_contents and talks to https://api.exa.ai. Requiring EXA_API_KEY and cargo/bash for a one-time build is proportionate.
Instruction Scope
Runtime instructions stick to the stated purpose (build/run the binary, read EXA_API_KEY from ~/.openclaw/workspace/.env, pass JSON via stdin). Minor issues: SKILL.md and README reference the install path ~/.openclaw/.../skills/exa-search/bin/exa-search while install.sh copies to ~/.openclaw/.../skills/exa-search-rust/bin/exa-search — this path/name mismatch may cause confusion or broken example commands but does not indicate malicious behavior.
Install Mechanism
Installer is a local install.sh that invokes `cargo build --release` on included Rust source and copies the resulting binary into the workspace. No external arbitrary downloads or URL-extraction steps; upstream crates will be fetched from crates.io via cargo (expected).
Credentials
Only EXA_API_KEY is required/declared (primaryEnv). The SKILL.md helpers read the EXA_API_KEY line from ~/.openclaw/workspace/.env (they only grep for EXA_API_KEY=). The binary validates the key format and does not access other environment variables or sensitive system paths.
Persistence & Privilege
The skill is not always-enabled and can be invoked by the user. install.sh writes files under the user's ~/.openclaw/workspace/skills/ directory (its own skill dir) — standard behavior for a skill installation and not an elevation of privilege or modification of other skills' configs.
Assessment
This package appears to be what it claims: a native Exa AI search client that requires one API key. Before installing:

1. Inspect install.sh (it builds the included Rust source with cargo and copies the binary into your OpenClaw workspace). Note that the example commands reference a directory named `exa-search` while install.sh uses `exa-search-rust`; confirm the actual install path and adjust commands accordingly.
2. Only provide EXA_API_KEY (store it in ~/.openclaw/workspace/.env as instructed).
3. Building uses cargo, which fetches crates from crates.io; if you have policies about third-party crates, audit Cargo.toml.
4. Confirm you trust the Exa API endpoint (api.exa.ai) and treat the API key as a secret: use least-privilege keys and monitor usage.

If you want higher assurance, run the build in an isolated environment or review the compiled binary before installing.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.3
Latest: vk970m6zxwsvts39qjcp8ry4jex81y4qw

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🔍 Clawdis
Bins: bash, cargo
Env: EXA_API_KEY
Primary env: EXA_API_KEY

SKILL.md

exa-search skill

Use this skill to search the web, find similar pages, or fetch page contents via the Exa AI search engine — fast, neural, and certificate-aware.

The skill invokes a native Rust binary (bin/exa-search) via Bash. Run install.sh once to build it.


When to Use / When NOT to Use

USE when:

  • Searching the web for articles, documentation, repos, papers, tools, people, companies
  • Finding recent news or announcements (use livecrawl: "fallback" or "always" for recency)
  • Fetching full text of a known URL without browser automation
  • Finding pages similar to a reference URL (competitor analysis, alternative tools)
  • Any web lookup that isn't a specific tweet or video

NOT FOR:

  • Fetching tweets/X posts → use fxtwitter skill (Exa can't fetch tweet URLs)
  • Downloading video/audio → use yt-dlp
  • Scraping dynamic or Cloudflare-protected pages → use scrapling
  • Local file/code search → use rg, find, or grep
  • Querying structured APIs (GitHub, weather, etc.) → use their dedicated skills
  • When the query is already a direct URL with known content → prefer get_contents action

Prerequisites

  • EXA_API_KEY set in ~/.openclaw/workspace/.env (get one at exa.ai)
  • Rust installed (rustup) — only needed for the one-time build
  • bin/exa-search binary present (run bash install.sh to build)
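Before the first invocation, a quick preflight check can save a confusing failure. The helper below is a hypothetical convenience, not part of the skill; the paths follow the examples in this file, so adjust them if install.sh placed the binary under exa-search-rust instead (see the path mismatch noted in the scan report).

```shell
# Hypothetical preflight helper (not part of the skill). Default paths follow
# the examples in this SKILL.md; pass your own if the install used a
# different directory.
exa_preflight() {
  local bin="${1:-$HOME/.openclaw/workspace/skills/exa-search/bin/exa-search}"
  local env_file="${2:-$HOME/.openclaw/workspace/.env}"
  # Binary must exist and be executable (produced by `bash install.sh`)
  [ -x "$bin" ] || { echo "binary missing: run 'bash install.sh'"; return 1; }
  # The .env file must contain an EXA_API_KEY= line
  grep -qE '^EXA_API_KEY=' "$env_file" 2>/dev/null \
    || { echo "EXA_API_KEY not found in $env_file"; return 1; }
  echo "ok"
}
```

Run it once before the commands below; any output other than `ok` names the missing prerequisite.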

Actions

1. Search

echo '{"query":"your query here","num_results":5}' \
  | EXA_API_KEY=$(grep -E "^EXA_API_KEY=" ~/.openclaw/workspace/.env | cut -d= -f2 | tr -d '"') \
  ~/.openclaw/workspace/skills/exa-search/bin/exa-search

Full params:

{
  "query": "rust async programming",
  "num_results": 5,
  "type": "neural",
  "livecrawl": "never",
  "include_domains": ["github.com", "docs.rs"],
  "exclude_domains": ["reddit.com"],
  "start_published_date": "2025-01-01",
  "end_published_date": "2026-12-31",
  "category": "research paper",
  "use_autoprompt": true,
  "text": { "max_characters": 2000 },
  "highlights": { "num_sentences": 3, "highlights_per_url": 2 },
  "summary": { "query": "key takeaways" }
}

type options: auto (default) · neural · keyword

livecrawl options:

  • "never" — fastest (~300-600ms), pure cached index. Best for reference material, docs, courses.
  • "fallback" — use cache, crawl live if not cached. Good default.
  • "preferred" — prefer live crawl. Slower but fresher.
  • "always" — always crawl live. For breaking news or rapidly-changing pages.

2. Find Similar

Find pages similar to a given URL:

echo '{"action":"find_similar","url":"https://example.com","num_results":5}' \
  | EXA_API_KEY=$(grep -E "^EXA_API_KEY=" ~/.openclaw/workspace/.env | cut -d= -f2 | tr -d '"') \
  ~/.openclaw/workspace/skills/exa-search/bin/exa-search

Params: same contents options as search (text, highlights, summary, livecrawl)


3. Get Contents

Fetch full contents for one or more URLs:

echo '{"action":"get_contents","urls":["https://example.com","https://other.com"],"text":{"max_characters":1000}}' \
  | EXA_API_KEY=$(grep -E "^EXA_API_KEY=" ~/.openclaw/workspace/.env | cut -d= -f2 | tr -d '"') \
  ~/.openclaw/workspace/skills/exa-search/bin/exa-search

Output format

All actions return JSON on stdout:

{
  "ok": true,
  "action": "search",
  "results": [
    {
      "url": "https://...",
      "title": "...",
      "score": 0.87,
      "author": "...",
      "published_date": "2026-01-15",
      "image": "https://...",
      "favicon": "https://...",
      "text": "...",
      "highlights": ["..."],
      "summary": "..."
    }
  ],
  "formatted": "## [Title](url)\n..."
}

On error:

{ "ok": false, "error": "..." }

The formatted field is ready-to-use markdown — you can send it directly to the user.
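If you need individual fields rather than the prebuilt markdown, any JSON processor works on the same output. A sketch using jq (jq is an assumption here, not a skill requirement; the literal below stands in for real binary output):

```shell
# Sketch: post-process a response. The literal stands in for real output
# from the binary; jq is an assumption, not a skill dependency.
response='{"ok":true,"action":"search","results":[{"url":"https://example.com","title":"Example","score":0.87}]}'

# Bail out early on { "ok": false, ... } — jq -e sets its exit status
# from the value of .ok
printf '%s' "$response" | jq -e '.ok' > /dev/null || echo "exa-search failed"

# One tab-separated line per result: score, title, url
printf '%s' "$response" | jq -r '.results[] | "\(.score)\t\(.title)\t\(.url)"'
```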


Speed reference (same query, 3 runs)

Mode                           Avg      Peak
livecrawl: "never" (instant)   ~440ms   308ms
Default (no livecrawl)         ~927ms   629ms

~18.7× faster than Exa MCP at peak.


Helper: load API key

EXA_API_KEY=$(grep -E "^EXA_API_KEY=" ~/.openclaw/workspace/.env | cut -d= -f2 | tr -d '"')

Or export once at the top of a longer workflow:

export EXA_API_KEY=$(grep -E "^EXA_API_KEY=" ~/.openclaw/workspace/.env | cut -d= -f2 | tr -d '"')
echo '{"query":"..."}' | ~/.openclaw/workspace/skills/exa-search/bin/exa-search
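For longer sessions, the key-loading and invocation lines can be folded into a small wrapper. This is a hypothetical convenience, not part of the skill; the binary path follows the examples in this file, so adjust it if your install directory is exa-search-rust. Note the wrapper uses `cut -d= -f2-` (rather than `-f2`) so a key value containing `=` survives intact.

```shell
# Hypothetical helpers (not part of the skill).
# Extract the key value from a .env-style file; -f2- keeps any '=' inside
# the value, and tr strips surrounding double quotes.
exa_key_from() {
  grep -E '^EXA_API_KEY=' "$1" | cut -d= -f2- | tr -d '"'
}

# Read the key once, then pipe any JSON payload to the binary. Path is an
# assumption taken from this SKILL.md's examples.
exa() {
  local key
  key=$(exa_key_from "$HOME/.openclaw/workspace/.env") || return 1
  printf '%s' "$1" | EXA_API_KEY="$key" \
    "$HOME/.openclaw/workspace/skills/exa-search/bin/exa-search"
}
```

With the wrapper defined, a search becomes: `exa '{"query":"rust async","num_results":3,"livecrawl":"never"}'`.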

Invocation pattern

EXA_API_KEY=$(grep -E "^EXA_API_KEY=" ~/.openclaw/workspace/.env | cut -d= -f2 | tr -d '"')
echo '{"query":"...","num_results":5,"livecrawl":"never"}' \
  | EXA_API_KEY="$EXA_API_KEY" ~/.openclaw/workspace/skills/exa-search/bin/exa-search

The formatted field in the output is ready-to-use markdown — send it directly to the user.


Mode selection (be deliberate, every search)

Situation                                               livecrawl    type
Docs, tutorials, courses, reference material            "never"      "neural"
General research (people, tools, concepts, companies)   "never"      "neural"
Exact function names, error messages, package names     "never"      "keyword"
Recent releases, changelogs, GitHub repos               "fallback"   "auto"
News or announcements from the last 1-2 weeks           "fallback"   "neural"
Breaking news, live prices, today's events              "always"     "neural"
Unsure                                                  "fallback"   "auto"

Default when in doubt: livecrawl: "never", type: "neural" — fastest, works for 80% of searches.


When to use each action

search (default) — use for any information retrieval from a query string.

find_similar — use when you have a URL and want more like it: related articles, alternative tools, similar repos, competing products.

echo '{"action":"find_similar","url":"https://...","num_results":5,"livecrawl":"never"}' | EXA_API_KEY="$EXA_API_KEY" ...

get_contents — use when you have a specific URL and need its full text: docs pages, blog posts, GitHub READMEs, papers. Faster than search when the URL is already known.

echo '{"action":"get_contents","urls":["https://..."],"text":{"max_characters":3000}}' | EXA_API_KEY="$EXA_API_KEY" ...

Enrich results for research tasks

When writing reports, summaries, or comparing multiple results — request highlights or summary per result:

{
  "query": "...",
  "num_results": 5,
  "highlights": { "num_sentences": 3, "highlights_per_url": 2 },
  "summary": { "query": "key takeaways for a developer" }
}

Note: highlights/summary add latency (~200-500ms extra). Only use when you actually need them.
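The enriched fields drop straight into report prose. A sketch of turning per-result summaries into bullet notes (the literal stands in for real binary output; jq is an assumption, not a skill requirement):

```shell
# Sketch: turn enriched results into markdown bullet notes for a report.
# The literal stands in for real output; jq is an assumption.
response='{"ok":true,"results":[{"title":"Tokio tutorial","url":"https://tokio.rs","summary":"Async runtime basics.","highlights":["Tasks are lightweight."]}]}'

printf '%s' "$response" \
  | jq -r '.results[] | "- [\(.title)](\(.url)): \(.summary)"'
```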

Files

11 total
