Tavily Best Practices
Pass. Audited by ClawScan on May 1, 2026.
Overview
This is a documentation-only Tavily integration guide. It includes the expected examples for package installation, API keys, web crawling/search, and RAG use, and no hidden code or suspicious behavior was found.
This skill appears safe as a reference-only documentation package. Before using its examples, verify package sources, protect API keys, keep crawl limits conservative, and validate any web content saved into RAG or knowledge-base systems.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Installing packages adds third-party code to the user's development environment.
The skill is documentation-only, but it instructs users to install third-party SDK packages as part of normal Tavily setup.
pip install tavily-python
...
npm install @tavily/core
Install only from trusted package registries, review package names carefully, and pin versions in production projects.
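The version-pinning advice above can be sketched as a small check. The `check_pin` helper and the pin shown are illustrative, not part of the Tavily docs:

```python
from importlib import metadata

def check_pin(package: str, pinned: str) -> bool:
    """Return True only if `package` is installed at exactly the pinned version."""
    try:
        return metadata.version(package) == pinned
    except metadata.PackageNotFoundError:
        # Fails closed when the package is absent rather than guessing.
        return False

# Illustrative pin; replace with the version you actually reviewed.
check_pin("tavily-python", "0.5.0")
```

In a CI step, a failed check would block deployment until the pinned version is installed from a trusted registry.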
A Tavily API key allows use of the user's Tavily account and can consume API credits.
The documentation expects a Tavily API key for service access, which is purpose-aligned for a Tavily integration guide.
# Uses TAVILY_API_KEY env var (recommended)
client = TavilyClient()
Use environment variables or secret managers, avoid hardcoding real keys in source code, and scope/rotate keys according to Tavily guidance.
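A minimal sketch of the environment-variable pattern, using a hypothetical `load_tavily_api_key` helper that fails loudly instead of falling back to a hardcoded key:

```python
import os

def load_tavily_api_key() -> str:
    """Read the key from the environment; never hardcode it in source."""
    key = os.environ.get("TAVILY_API_KEY")
    if not key:
        raise RuntimeError(
            "TAVILY_API_KEY is not set; export it or use a secret manager"
        )
    return key

# With tavily-python installed, the key would then be passed explicitly:
# client = TavilyClient(api_key=load_tavily_api_key())
```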
Misconfigured crawl settings could create unexpected costs, latency, or excessive site access.
The docs include examples of broad web crawling, which can generate many external requests and API usage; however, crawling is central to the documented Tavily feature, and the guide includes limiting guidance elsewhere.
response = client.crawl(
    url="https://example.com",
    max_depth=3,
    max_breadth=100,
    limit=1000,
)
Start with conservative depth, breadth, and limit settings; respect robots.txt and site policies; and require user approval before large crawls.
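The limiting guidance can be made concrete with a hypothetical pre-flight check that caps crawl parameters and requires explicit approval for anything larger. The cap values are illustrative, not Tavily defaults:

```python
# Illustrative conservative caps; tune for your site, budget, and latency needs.
CAPS = {"max_depth": 2, "max_breadth": 20, "limit": 50}

def check_crawl_params(params: dict, approved: bool = False) -> dict:
    """Reject crawls that exceed the caps unless the user explicitly approved them."""
    over = {k: v for k, v in params.items() if k in CAPS and v > CAPS[k]}
    if over and not approved:
        raise ValueError(f"Crawl exceeds conservative caps {over}; get user approval first")
    return params

# A small crawl passes; the broad example above would need approved=True.
check_crawl_params({"max_depth": 1, "max_breadth": 10, "limit": 25})
```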
Untrusted or inaccurate web content could be reused later by a RAG system if stored without validation.
The Hybrid RAG example stores externally retrieved web content in a local database for later retrieval, which can carry untrusted content into future model context.
Hybrid RAG: Combine web search with local database retrieval.
...
save_foreign=True  # Store web results in DB
Store source metadata, validate retrieved content, filter untrusted sources, and avoid treating saved web content as authoritative instructions.
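One way to follow this advice is to attach provenance metadata and filter against an allowlist before anything is saved. The helper name and the allowlist contents here are illustrative:

```python
from datetime import datetime, timezone
from typing import Optional
from urllib.parse import urlparse

TRUSTED_HOSTS = {"docs.python.org", "example.com"}  # illustrative allowlist

def prepare_for_storage(result: dict) -> Optional[dict]:
    """Wrap a web result with provenance; drop results from untrusted hosts."""
    host = urlparse(result["url"]).hostname or ""
    if host not in TRUSTED_HOSTS:
        return None  # filtered out, never enters the RAG store
    return {
        "content": result["content"],
        "source_url": result["url"],           # provenance for later review
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
        "authoritative": False,                # saved web text is data, not instructions
    }
```

Keeping the `authoritative` flag False ensures downstream prompts can treat stored web content as evidence to cite, not as instructions to follow.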
