NAT

v1.0.0

NVIDIA NeMo Agent Toolkit (NAT) — install, create workflows, add tools, run agents, evaluate performance, and publish as A2A/MCP servers. Use when: (1) insta...

by Saurav (@sauravdev)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for sauravdev/nat-skill.

Prompt Preview: Install & Setup
Install the skill "NAT" (sauravdev/nat-skill) from ClawHub.
Skill page: https://clawhub.ai/sauravdev/nat-skill
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install nat-skill

ClawHub CLI


npx clawhub@latest install nat-skill
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description, the required binaries (nat, pip, uv), and the primary credential (NVIDIA_API_KEY) are consistent with installing and using the NVIDIA NeMo Agent Toolkit. Examples referencing S3, OpenAI, Bedrock, etc., are plausible optional integrations for this toolkit rather than unexplained or unrelated demands.
Instruction Scope
SKILL.md stays on-topic: install, configure, run workflows, create custom tools, and publish A2A/MCP servers. It does instruct the user to start network services (nat a2a serve) and to place credentials into workflow YAMLs (e.g., ${S3_ACCESS_KEY}), so workflows can cause the agent to access external networks and third-party services. This is expected for an agent toolkit but is a material operational/privacy consideration.
Install Mechanism
Instruction-only skill (no install spec) that recommends standard pip installs (via 'uv pip install' or 'pip install'). No unusual download URLs or archive extraction are present in the provided docs.
Credentials
The skill declares NVIDIA_API_KEY as primaryEnv which matches NIM usage. However the documentation shows many optional integrations (S3 access_key/secret, OpenAI/Azure/Bedrock keys, NAT_PUBLIC_BASE_URL, etc.). Those are not required by default but will be needed if you enable corresponding components—review any workflow configs for embedded secrets and be cautious about using broad credentials.
Persistence & Privilege
Skill is not marked always:true and is user-invocable; it does not request persistent system-wide privileges or modify other skills' configurations in the docs. Running A2A servers will open network endpoints by design, which is a normal capability for this toolkit.
Assessment
This skill appears to be what it claims: documentation for NVIDIA's NeMo Agent Toolkit. Before installing or running workflows, review any workflow YAMLs and example configs for embedded secrets (S3 keys, OpenAI/Azure keys, NAT_PUBLIC_BASE_URL) and avoid putting long-lived credentials into public configs. If you publish an A2A agent (nat a2a serve), understand you are opening an HTTP endpoint—use authentication and limit network exposure. When installing packages via pip, prefer official releases (GitHub/NVIDIA docs) and inspect any third-party example code you copy (custom tools can run arbitrary Python and use boto3/requests). If you need a higher-assurance review, provide the specific workflow YAMLs or example code you plan to run so those can be checked for secret leakage or unexpected network destinations.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Binaries (any): nat, pip, uv
Primary env: NVIDIA_API_KEY
Latest: vk97ekmh7sfh4cr0frqcr419q4h844fmr
88 downloads · 0 stars · 1 version
Updated 3w ago
v1.0.0 · MIT-0

NVIDIA NeMo Agent Toolkit (NAT)

A flexible library for connecting enterprise agents to data sources and tools across any framework.

Installation

# Core (pick one)
uv pip install nvidia-nat        # recommended
pip install nvidia-nat

# With framework extras
uv pip install "nvidia-nat[langchain]"    # LangChain/LangGraph
uv pip install "nvidia-nat[llama-index]"  # LlamaIndex
uv pip install "nvidia-nat[crewai]"       # CrewAI
uv pip install "nvidia-nat[mcp]"          # MCP
uv pip install "nvidia-nat[a2a]"          # A2A
uv pip install "nvidia-nat[mem0ai]"       # Mem0 memory
uv pip install "nvidia-nat[eval,profiling]"  # Eval + profiling

# Verify
nat --help && nat --version

For development install from source, see references/install-from-source.md.
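Before running anything, it can help to confirm the runtime requirements listed on this page (the nat, pip, and uv binaries on PATH, plus NVIDIA_API_KEY set). The following preflight script is an illustrative sketch, not part of the toolkit:

```python
import os
import shutil

def preflight(binaries=("nat", "pip", "uv"), env_vars=("NVIDIA_API_KEY",)):
    """Report which required binaries and environment variables are missing."""
    missing_bins = [b for b in binaries if shutil.which(b) is None]
    missing_envs = [v for v in env_vars if not os.environ.get(v)]
    return missing_bins, missing_envs

bins, envs = preflight()
if bins or envs:
    print("Missing binaries:", bins, "| Missing env vars:", envs)
else:
    print("Environment ready for NAT.")
```

If anything is reported missing, revisit the install commands above and the `export NVIDIA_API_KEY=...` step in Quick Start.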

Quick Start

export NVIDIA_API_KEY=<key_from_build.nvidia.com>

Create workflow.yml:

functions:
  wikipedia_search:
    _type: wiki_search
    max_results: 2

llms:
  nim_llm:
    _type: nim
    model_name: meta/llama-3.1-70b-instruct
    temperature: 0.0

workflow:
  _type: react_agent
  tool_names: [wikipedia_search]
  llm_name: nim_llm
  verbose: true
  parse_agent_response_max_retries: 3
nat run --config_file workflow.yml --input "List five subspecies of Aardvarks"

Workflow Configuration Structure

Four main YAML sections:

  • functions — Tools (web search, calculators, custom)
  • llms — LLM provider configs (NIM, OpenAI, Azure, Bedrock)
  • embedders — Embedding models for vector storage
  • workflow — Agent type + wiring of tools and LLMs
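The wiring rule is that `workflow.tool_names` must name entries under `functions`, and `workflow.llm_name` must name an entry under `llms`. A minimal consistency check sketches this relationship (illustrative only; NAT performs its own validation when loading a config):

```python
def check_wiring(config):
    """Verify that the workflow section references defined functions/llms."""
    workflow = config.get("workflow", {})
    problems = []
    for tool in workflow.get("tool_names", []):
        if tool not in config.get("functions", {}):
            problems.append(f"unknown tool: {tool}")
    llm = workflow.get("llm_name")
    if llm and llm not in config.get("llms", {}):
        problems.append(f"unknown llm: {llm}")
    return problems

# The Quick Start config, expressed as a Python dict:
config = {
    "functions": {"wikipedia_search": {"_type": "wiki_search", "max_results": 2}},
    "llms": {"nim_llm": {"_type": "nim", "model_name": "meta/llama-3.1-70b-instruct"}},
    "workflow": {
        "_type": "react_agent",
        "tool_names": ["wikipedia_search"],
        "llm_name": "nim_llm",
    },
}
print(check_wiring(config))  # → []
```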

Agent Types (_type in workflow)

  • react_agent — Reasoning and acting
  • reasoning_agent — Advanced reasoning
  • rewoo_agent — ReWOO (Reasoning Without Observation)
  • responses_api_agent — OpenAI Responses API
  • tool_calling_agent — Direct tool calling
  • automatic_memory_wrapper_agent — Adds memory
  • router_agent — Routes to different workflows
  • sequential_executor — Sequential tool execution

Built-in Tools (_type in functions)

wiki_search, webpage_query, tavily_internet_search, arxiv_search, current_datetime, calculator, text_file_ingest, and many more framework-specific tools.

List all available components:

nat info components -t function      # Tools
nat info components -t llm_provider  # LLMs
nat info components -t embedder      # Embedders

Common CLI Commands

# Run workflow
nat run --config_file workflow.yml --input "question"

# Override params without editing YAML
nat run --config_file workflow.yml --input "question" \
  --override llms.nim_llm.temperature 0.7 \
  --override llms.nim_llm.model_name meta/llama-3.3-70b-instruct

# Create new workflow template
nat workflow create --workflow-dir examples my_workflow

# Evaluate
nat eval --config_file eval_config.yml

# Profile
nat profiler --config_file workflow.yml --input "test"

# Red team
nat red-team --config_file workflow.yml

# Workflow management
nat workflow reinstall my_workflow
nat workflow delete my_workflow
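The `--override` flag takes a dotted path into the YAML tree plus a new value, leaving the file on disk untouched. Its effect on the loaded config can be sketched as follows (an illustrative model, not NAT's actual implementation):

```python
def apply_override(config, dotted_key, value):
    """Set a nested config value along a dotted path, e.g. 'llms.nim_llm.temperature'."""
    *parents, leaf = dotted_key.split(".")
    node = config
    for key in parents:
        node = node[key]  # descend into nested dicts
    node[leaf] = value
    return config

config = {"llms": {"nim_llm": {"temperature": 0.0}}}
apply_override(config, "llms.nim_llm.temperature", 0.7)
print(config["llms"]["nim_llm"]["temperature"])  # → 0.7
```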

Custom Tools and Function Groups

For creating custom tools, function groups, and advanced patterns, see the accompanying files in the skill's references/ directory.

A2A Server

Publish workflows as A2A agents for discovery and invocation by other A2A clients.

# Start A2A server
nat a2a serve --config_file workflow.yml

# Discover agent
nat a2a client discover --url http://localhost:10000

# Call agent
nat a2a client call --url http://localhost:10000 --message "What is 42 * 67?"

For full A2A configuration (auth, concurrency, Kubernetes), see references/a2a-server.md.
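For programmatic discovery, the A2A protocol publishes an agent card at a well-known URL (the spec uses `/.well-known/agent.json`). A minimal standard-library sketch, assuming a server started with `nat a2a serve` is listening locally on port 10000:

```python
import json
import urllib.request

def agent_card_url(base_url):
    """Build the well-known agent-card URL assumed by the A2A spec."""
    return base_url.rstrip("/") + "/.well-known/agent.json"

def discover(base_url="http://localhost:10000"):
    """Fetch and parse the agent card from a running A2A server."""
    with urllib.request.urlopen(agent_card_url(base_url)) as resp:
        return json.load(resp)

# With a local server running:
# card = discover()
# print(card.get("name"), card.get("description"))
```

Prefer the `nat a2a client discover` command shown above for day-to-day use; this sketch only shows what that discovery step looks like on the wire.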

Examples

The repo includes examples organized by category: Getting Started, Agents, Advanced Agents, Control Flow, Frameworks, MCP/A2A, Evaluation, and more. See references/examples.md for the full catalog and how to run them.

# Run any example
uv pip install -e examples/<example_directory>
nat run --config_file examples/<example_directory>/configs/config.yml --input "test"
