Tavily Best Practices

Pass. Audited by ClawScan on May 10, 2026.

Overview

This skill is a documentation-only Tavily integration guide containing the expected examples for API usage, package installation, search, extraction, crawling, and research.

This skill appears safe as a reference guide. Before using its examples, verify the Tavily packages, keep your API key secret, and bound any search, extraction, or crawl jobs to the specific sites and content you intend to access.

Findings (3)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: API key management

What this means

Users must manage a Tavily API key for real API calls; a mishandled key could let others consume their Tavily account quota.

Why it was flagged

The skill documents use of a Tavily API key, which is expected for Tavily API integration and is not shown being logged, hardcoded beyond a placeholder, or sent to unrelated services.

Skill content
client = TavilyClient()  # Option 1: Uses TAVILY_API_KEY env var (recommended)
...
client = TavilyClient(api_key="tvly-YOUR_API_KEY")
Recommendation

Store the Tavily API key in an environment variable or secret manager, avoid committing it to source control, and use the least-privileged key settings available.
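As a minimal sketch of the environment-variable approach (the helper function below is our illustration, not part of the Tavily SDK), the key can be read and validated once at startup:

```python
import os

def get_tavily_api_key() -> str:
    """Read the Tavily API key from the environment, failing fast if absent."""
    key = os.environ.get("TAVILY_API_KEY")
    if not key:
        raise RuntimeError("TAVILY_API_KEY is not set; export it before running")
    return key

# The key is then supplied to the client rather than hardcoded in source:
# client = TavilyClient(api_key=get_tavily_api_key())
```

Failing fast keeps a missing or empty key from silently falling through to an unauthenticated request, and keeps the literal key out of source control.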

Finding 2: Third-party package installation

What this means

Installing third-party packages adds normal dependency supply-chain risk to the user's development environment.

Why it was flagged

The skill recommends installing Tavily client packages through package managers. This is purpose-aligned setup documentation, but users should still verify package provenance.

Skill content
pip install tavily-python
...
npm install @tavily/core
Recommendation

Install from trusted package registries, verify package names and maintainers, and pin versions in production projects.
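A pinned dependency spec might look like the fragment below (the version numbers are illustrative placeholders, not current releases; check the registry for the versions you actually reviewed):

```text
# requirements.txt (Python): pin an exact, reviewed version
tavily-python==0.5.0

# package.json (Node): prefer an exact version over "^"/"~" ranges in production
"dependencies": { "@tavily/core": "0.3.1" }
```

Exact pins make installs reproducible and ensure a future registry release cannot change what ships without an explicit review.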

Finding 3: Web crawling and content extraction

What this means

Crawling or extracting web content can make external requests and consume API quota; poorly scoped crawls can collect more content than intended.

Why it was flagged

The skill documents site crawling and content extraction capabilities. The examples include scoping parameters such as depth, chunks, and path filters, making the behavior purpose-aligned and bounded in the documentation.

Skill content
Content from entire site | `crawl()`
...
response = client.crawl(
    url="https://docs.example.com",
    max_depth=2,
    instructions="Find API documentation pages",
    chunks_per_source=3,
    select_paths=["/docs/.*", "/api/.*"],
)
Recommendation

Use explicit target URLs, depth limits, page limits, and path filters, and ensure crawling complies with site terms and user intent.
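One way to enforce those bounds is a small pre-flight check before a crawl is dispatched. The helper below and its limits are our illustration, not part of the Tavily SDK; the parameter names mirror the documented `crawl()` call:

```python
def check_crawl_scope(url: str, max_depth: int, select_paths: list[str],
                      max_depth_cap: int = 3) -> None:
    """Reject crawl jobs that are unscoped or deeper than intended."""
    if not url.startswith("https://"):
        raise ValueError("crawl targets should be explicit https:// URLs")
    if max_depth > max_depth_cap:
        raise ValueError(f"max_depth {max_depth} exceeds cap {max_depth_cap}")
    if not select_paths:
        raise ValueError("provide select_paths so the crawl stays on intended content")

# Usage, mirroring the documented example:
# check_crawl_scope("https://docs.example.com", 2, ["/docs/.*", "/api/.*"])
# response = client.crawl(url="https://docs.example.com", max_depth=2, ...)
```

Centralizing the guard rails in one place keeps quota use and collected content bounded even when crawl parameters come from user input.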