Vercel AI SDK

v1.0.0

Vercel AI SDK for building chat interfaces with streaming. Use when implementing the useChat hook, handling tool calls, streaming responses, or building chat UI.

by Kevin Anderson (@anderskev)
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (Vercel AI SDK for streaming chat, useChat, tools, etc.) matches the SKILL.md content and reference files. All required behaviors (streaming, tool handling, UI patterns) are documented and consistent with the stated purpose.
Instruction Scope
SKILL.md is documentation and example code showing client/server patterns, useChat, streamText, tool execution, and SSE parsing. The instructions do not direct the agent to read unrelated system files or hidden endpoints. They do show examples that save messages to a database or call external APIs (e.g., fetchWeather, EventSource('/api/chat')), which is expected for this SDK but means implementers must provide those server endpoints and storage themselves.
Install Mechanism
No install spec and no code files to execute — lowest-risk instruction-only skill. Nothing is downloaded or written to disk by the skill bundle itself.
Credentials
The skill declares no required environment variables or credentials. Example code references model providers (e.g., openai('gpt-4')) which in real deployments will require provider API keys or configuration, but those are not requested by this skill (appropriate for a docs-only skill). Users should be aware they will need to supply provider credentials and any DB/API keys when implementing the examples.
Persistence & Privilege
The skill's always flag is false and the skill is user-invocable only. There is no attempt to modify other skills or system configuration; no persistent presence is requested.
Assessment
This skill is documentation and example code for the Vercel AI SDK — it does not include executables or request credentials. Before using the examples, ensure you provision and secure any model provider credentials (e.g., OpenAI API key), host the server endpoint (/api/chat) shown in examples, and secure any database/storage used for saving messages. When implementing client-side tools that access sensitive data (location, files, external APIs), review and restrict what is sent to the server and where credentials are stored. If you plan to run third-party code derived from these examples, vet the actual packages you install and their sources.




SKILL.md

Vercel AI SDK

The Vercel AI SDK provides React hooks and server utilities for building streaming chat interfaces with support for tool calls, file attachments, and multi-step reasoning.

Quick Reference

Basic useChat Setup

import { useChat } from '@ai-sdk/react';

const { messages, status, sendMessage, stop, regenerate } = useChat({
  id: 'chat-id',
  messages: initialMessages,
  onFinish: ({ message, messages, isAbort, isError }) => {
    console.log('Chat finished');
  },
  onError: (error) => {
    console.error('Chat error:', error);
  }
});

// Send a message
sendMessage({ text: 'Hello', metadata: { createdAt: Date.now() } });

// Send with files
sendMessage({
  text: 'Analyze this',
  files: fileList // FileList or FileUIPart[]
});

ChatStatus States

The status field indicates the current state of the chat:

  • ready: Chat is idle and ready to accept new messages
  • submitted: Message sent to API, awaiting response stream start
  • streaming: Response actively streaming from the API
  • error: An error occurred during the request
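
One practical use of status is gating the send button while a request is in flight. A minimal sketch (statusToUi is a hypothetical helper, not an SDK export; the labels are illustrative):

```typescript
// The four ChatStatus values listed above.
type ChatStatus = 'ready' | 'submitted' | 'streaming' | 'error';

// Map each status to whether the composer should accept input,
// plus a short label for the send/retry button.
function statusToUi(status: ChatStatus): { canSend: boolean; label: string } {
  switch (status) {
    case 'ready':
      return { canSend: true, label: 'Send' };
    case 'submitted':
      return { canSend: false, label: 'Waiting for response…' };
    case 'streaming':
      return { canSend: false, label: 'Streaming…' };
    case 'error':
      return { canSend: true, label: 'Retry' };
  }
}
```

In a component, this keeps the disabled/label logic in one place instead of scattering status comparisons through the JSX.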

Message Structure

Messages use the UIMessage type with a parts-based structure:

interface UIMessage {
  id: string;
  role: 'system' | 'user' | 'assistant';
  metadata?: unknown;
  parts: Array<UIMessagePart>; // text, file, tool-*, reasoning, etc.
}

Part types include:

  • text: Text content with optional streaming state
  • file: File attachments (images, documents)
  • tool-{toolName}: Tool invocations with state machine
  • reasoning: AI reasoning traces
  • data-{typeName}: Custom data parts
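
Because content is spread across parts, extracting a message's plain text means filtering for text parts. A sketch using simplified part shapes (the real UIMessagePart union carries more fields):

```typescript
// Simplified part shapes for illustration only.
type TextPart = { type: 'text'; text: string };
type FilePart = { type: 'file'; url: string; mediaType: string };
type Part = TextPart | FilePart | { type: string };

// Concatenate the text parts of a message, ignoring files, tool calls, etc.
function messageText(parts: Part[]): string {
  return parts
    .filter((p): p is TextPart => p.type === 'text')
    .map((p) => p.text)
    .join('');
}

const parts: Part[] = [
  { type: 'text', text: 'Here is the chart: ' },
  { type: 'file', url: '/chart.png', mediaType: 'image/png' },
  { type: 'text', text: 'as requested.' },
];
// messageText(parts) === 'Here is the chart: as requested.'
```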

Server-Side Streaming

import { streamText, tool, convertToModelMessages } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamText({
  model: openai('gpt-4'),
  messages: convertToModelMessages(uiMessages),
  tools: {
    getWeather: tool({
      description: 'Get weather',
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => {
        return { temperature: 72, weather: 'sunny' };
      }
    })
  }
});

return result.toUIMessageStreamResponse({
  originalMessages: uiMessages,
  onFinish: ({ messages }) => {
    // Save to database
  }
});

Tool Handling Patterns

Client-Side Tool Execution:

const { addToolOutput } = useChat({
  onToolCall: async ({ toolCall }) => {
    if (toolCall.toolName === 'getLocation') {
      addToolOutput({
        tool: 'getLocation',
        toolCallId: toolCall.toolCallId,
        output: 'San Francisco'
      });
    }
  }
});

Rendering Tool States:

{message.parts.map((part, i) => {
  if (part.type === 'tool-getWeather') {
    switch (part.state) {
      case 'input-streaming':
        return <pre key={i}>{JSON.stringify(part.input, null, 2)}</pre>;
      case 'input-available':
        return <div key={i}>Getting weather for {part.input.city}...</div>;
      case 'output-available':
        return <div key={i}>Weather: {part.output.weather}</div>;
      case 'output-error':
        return <div key={i}>Error: {part.errorText}</div>;
    }
  }
  return null;
})}

Reference Files

Detailed documentation on specific aspects:

Common Patterns

Error Handling

const { error, clearError } = useChat({
  onError: (error) => {
    toast.error(error.message);
  }
});

// Clear error and reset to ready state
if (error) {
  clearError();
}

Message Regeneration

const { regenerate } = useChat();

// Regenerate last assistant message
await regenerate();

// Regenerate specific message
await regenerate({ messageId: 'msg-123' });

Custom Transport

import { DefaultChatTransport } from 'ai';

const { messages } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    prepareSendMessagesRequest: ({ id, messages, trigger, messageId }) => ({
      body: {
        chatId: id,
        lastMessage: messages[messages.length - 1],
        trigger,
        messageId
      }
    })
  })
});

Performance Optimization

// Throttle UI updates to reduce re-renders
const chat = useChat({
  experimental_throttle: 100 // Update max once per 100ms
});

Automatic Message Sending

import { lastAssistantMessageIsCompleteWithToolCalls } from 'ai';

const chat = useChat({
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls
  // Automatically resend when all tool calls have outputs
});

Type Safety

The SDK provides full type inference for tools and messages:

import { tool, type InferUITools, type UIDataTypes, type UIMessage } from 'ai';
import { z } from 'zod';

const tools = {
  getWeather: tool({
    inputSchema: z.object({ city: z.string() }),
    execute: async ({ city }) => ({ weather: 'sunny' })
  })
};

type MyMessage = UIMessage<
  { createdAt: number }, // Metadata type
  UIDataTypes,
  InferUITools<typeof tools> // Tool types
>;

const { messages } = useChat<MyMessage>();

Key Concepts

Parts-Based Architecture

Messages use a parts array instead of a single content field. This allows:

  • Streaming text while maintaining other parts
  • Tool calls with independent state machines
  • File attachments and custom data mixed with text

Tool State Machine

Tool parts progress through states:

  1. input-streaming: Tool input streaming (optional)
  2. input-available: Tool input complete
  3. approval-requested: Waiting for user approval (optional)
  4. approval-responded: User approved/denied (optional)
  5. output-available: Tool execution complete
  6. output-error: Tool execution failed
  7. output-denied: User denied approval
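
The full set of states above can be handled exhaustively. A sketch with a hypothetical toolStateLabel helper (the state names match the list; the labels are illustrative, not SDK output):

```typescript
// The seven tool-part states, as listed above.
type ToolState =
  | 'input-streaming'
  | 'input-available'
  | 'approval-requested'
  | 'approval-responded'
  | 'output-available'
  | 'output-error'
  | 'output-denied';

// Map each state to a short status line for the UI. An exhaustive
// switch lets the compiler flag any state added in a future SDK version.
function toolStateLabel(state: ToolState, toolName: string): string {
  switch (state) {
    case 'input-streaming':
      return `Preparing ${toolName}…`;
    case 'input-available':
      return `Running ${toolName}…`;
    case 'approval-requested':
      return `${toolName} needs your approval`;
    case 'approval-responded':
      return `Approval recorded for ${toolName}`;
    case 'output-available':
      return `${toolName} finished`;
    case 'output-error':
      return `${toolName} failed`;
    case 'output-denied':
      return `${toolName} was denied`;
  }
}
```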

Streaming Protocol

The SDK uses Server-Sent Events (SSE) with UIMessageChunk types:

  • text-start, text-delta, text-end
  • tool-input-available, tool-output-available
  • reasoning-start, reasoning-delta, reasoning-end
  • start, finish, abort
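
Consumers of the stream typically fold these chunks into UI state. A simplified sketch that accumulates text-delta chunks into a string (chunk shapes are reduced to the fields used here; consult the SDK's UIMessageChunk types for the full protocol):

```typescript
// Reduced chunk shape: only the type tag and the text-delta payload.
type Chunk = { type: string; delta?: string };

// Fold a sequence of chunks into the streamed text so far.
function accumulateText(chunks: Chunk[]): string {
  let text = '';
  for (const chunk of chunks) {
    if (chunk.type === 'text-delta' && chunk.delta !== undefined) {
      text += chunk.delta;
    }
  }
  return text;
}

const stream: Chunk[] = [
  { type: 'text-start' },
  { type: 'text-delta', delta: 'Hel' },
  { type: 'text-delta', delta: 'lo' },
  { type: 'text-end' },
  { type: 'finish' },
];
// accumulateText(stream) === 'Hello'
```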

Client vs Server Tools

Server-side tools have an execute function and run on the API route.

Client-side tools omit execute and are handled via onToolCall and addToolOutput.
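
A common pattern is a small dispatch table for client-side tools, invoked from onToolCall with the result passed to addToolOutput. A hedged sketch (clientToolHandlers and runClientTool are illustrative names, not SDK exports):

```typescript
// Shape of the tool call delivered to onToolCall (simplified).
type ClientToolCall = { toolName: string; toolCallId: string; input: unknown };

// One async handler per client-side tool. The getLocation handler here
// returns a fixed value; a real one might wrap navigator.geolocation.
const clientToolHandlers: Record<string, (input: unknown) => Promise<string>> = {
  getLocation: async () => 'San Francisco',
};

// Dispatch a tool call to its handler; the caller forwards the
// resolved output to addToolOutput with the matching toolCallId.
async function runClientTool(call: ClientToolCall): Promise<string> {
  const handler = clientToolHandlers[call.toolName];
  if (!handler) throw new Error(`No client handler for ${call.toolName}`);
  return handler(call.input);
}
```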

Best Practices

  1. Always handle the error state and provide user feedback
  2. Use experimental_throttle for high-frequency updates
  3. Implement proper loading states based on status
  4. Type your messages with custom metadata and tools
  5. Use sendAutomaticallyWhen for multi-turn tool workflows
  6. Handle all tool states in the UI for better UX
  7. Use stop() to allow users to cancel long-running requests
  8. Validate messages with validateUIMessages on the server

Files

5 total