Tavily Best Practices

Pass. Audited by ClawScan on May 1, 2026.

Overview

This is a documentation-only Tavily integration guide. It contains the expected examples for package installation, API key handling, web crawling/search, and RAG usage, and shows no evidence of hidden code or suspicious behavior.

This skill appears safe as a reference-only documentation package. Before using its examples, verify package sources, protect API keys, keep crawl limits conservative, and validate any web content saved into RAG or knowledge-base systems.

Findings (4)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Third-party package installation

What this means

Installing packages adds third-party code to the user's development environment.

Why it was flagged

The skill is documentation-only, but it instructs users to install third-party SDK packages as part of normal Tavily setup.

Skill content
pip install tavily-python ... npm install @tavily/core
Recommendation

Install only from trusted package registries, review package names carefully, and pin versions in production projects.
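
As a sketch of the pinning recommendation (the version numbers below are illustrative placeholders, not vetted releases):

pip install tavily-python==0.7.0   # pin an exact, reviewed version
npm install @tavily/core@0.3.1     # likewise for the Node SDK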

Finding 2: Tavily API key usage

What this means

A Tavily API key may allow usage of the user's Tavily account and consume API credits.

Why it was flagged

The documentation expects a Tavily API key for service access, which is purpose-aligned for a Tavily integration guide.

Skill content
# Uses TAVILY_API_KEY env var (recommended)
client = TavilyClient()
Recommendation

Use environment variables or secret managers, avoid hardcoding real keys in source code, and scope/rotate keys according to Tavily guidance.
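
A minimal sketch of the recommended pattern, assuming the tavily-python client and a key supplied via the environment or a secret manager:

import os
from tavily import TavilyClient

# Read the key from the environment (or a secret manager) instead of hardcoding it.
api_key = os.environ.get("TAVILY_API_KEY")
if not api_key:
    raise RuntimeError("TAVILY_API_KEY is not set; configure it before creating the client")

client = TavilyClient(api_key=api_key)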

Finding 3: Web crawl scope and limits

What this means

Misconfigured crawl settings could create unexpected costs, latency, or excessive site access.

Why it was flagged

The docs include examples of broad web crawling, which can generate many external requests and API usage; however, crawling is central to the documented Tavily feature, and the skill provides limiting guidance elsewhere.

Skill content
response = client.crawl(
    url="https://example.com",
    max_depth=3,
    max_breadth=100,
    limit=1000,
Recommendation

Start with conservative depth, breadth, and limit settings; respect robots.txt and site policies; and require user approval before large crawls.
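
A conservative sketch reusing the crawl parameters from the skill's excerpt; the limit values and the approval prompt are illustrative, not Tavily defaults:

from tavily import TavilyClient

client = TavilyClient()  # assumes TAVILY_API_KEY is set in the environment

# Start small; widen depth, breadth, and limit only after reviewing cost and site policies.
crawl_params = {"url": "https://example.com", "max_depth": 1, "max_breadth": 10, "limit": 50}

# Illustrative approval gate: ask before anything larger than a small crawl.
if crawl_params["limit"] > 100:
    answer = input(f"Crawl up to {crawl_params['limit']} pages from {crawl_params['url']}? [y/N] ")
    if answer.strip().lower() != "y":
        raise SystemExit("Crawl cancelled by user")

response = client.crawl(**crawl_params)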

Finding 4: Web content stored for RAG retrieval

What this means

Untrusted or inaccurate web content could be reused later by a RAG system if stored without validation.

Why it was flagged

The Hybrid RAG example stores externally retrieved web content in a local database for later retrieval, which can carry untrusted content into future model context.

Skill content
Hybrid RAG

Combine web search with local database retrieval. ... save_foreign=True # Store web results in DB
Recommendation

Store source metadata, validate retrieved content, filter untrusted sources, and avoid treating saved web content as authoritative instructions.
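
A hypothetical sketch of the recommended validation step, assuming tavily-python's search response shape (a "results" list with "url", "title", and "content" fields) and a placeholder save_to_db helper standing in for the skill's local database layer:

from urllib.parse import urlparse
from tavily import TavilyClient

client = TavilyClient()

# Illustrative allowlist; replace with the sources your project actually trusts.
TRUSTED_DOMAINS = {"docs.tavily.com", "example.com"}

def save_to_db(record):
    """Placeholder standing in for the hybrid RAG example's local database write."""

response = client.search("tavily hybrid rag")
for result in response.get("results", []):
    if urlparse(result["url"]).netloc not in TRUSTED_DOMAINS:
        continue  # drop untrusted sources before they reach the knowledge base
    save_to_db({
        "url": result["url"],            # keep source metadata with every stored record
        "title": result.get("title"),
        "content": result["content"],    # treat stored web text as data, not instructions
        "retrieved_with": "tavily_search",
    })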