Vercel AI SDK

The Vercel AI SDK is an elegant tool for building AI-powered applications. Braintrust natively supports tracing requests made with the Vercel AI SDK.
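To run the examples below, you'll need the Braintrust SDK installed alongside the AI SDK and a provider package. A minimal setup sketch, with package names inferred from the imports used in this guide:

```shell
# Install the Braintrust SDK, the Vercel AI SDK, an OpenAI provider, and zod
# (package names inferred from the imports used in the examples below)
npm install braintrust ai @ai-sdk/openai zod

# The examples read your Braintrust API key from this environment variable
export BRAINTRUST_API_KEY="<your api key>"
```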

Vercel AI SDK v5 (wrapAISDK)

`wrapAISDK` wraps the top-level AI SDK functions (`generateText`, `streamText`, `generateObject`, `streamObject`) and automatically creates spans with full input/output logging, metrics, and tool call tracing.

trace-vercel-ai-sdk-v5.ts
```typescript
import { initLogger, wrapAISDK } from "braintrust";
import * as ai from "ai";
import { openai } from "@ai-sdk/openai";

// `initLogger` sets up your code to log to the specified Braintrust project using your API key.
// By default, all wrapped models will log to this project. If you don't call `initLogger`,
// then wrapping is a no-op, and you will not see spans in the UI.
initLogger({
  projectName: "My AI Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const { generateText } = wrapAISDK(ai);

async function main() {
  // This will automatically log the request, response, and metrics to Braintrust
  const { text } = await generateText({
    model: openai("gpt-4"),
    prompt: "What is the capital of France?",
  });
  console.log(text);
}

main();
```

Tool calls with wrapAISDK

`wrapAISDK` automatically traces both the LLM's tool call suggestions and the actual tool executions. It supports both the array-based and object-based `tools` formats from the AI SDK.

wrap-ai-sdk-tools.ts
```typescript
import { initLogger, wrapAISDK } from "braintrust";
import * as ai from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

initLogger({
  projectName: "Tool Tracing",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const { generateText } = wrapAISDK(ai);

async function main() {
  const { text } = await generateText({
    model: openai("gpt-4"),
    prompt: "What's the weather like in San Francisco?",
    tools: {
      getWeather: {
        description: "Get weather for a location",
        parameters: z.object({
          location: z.string().describe("The city name"),
        }),
        // Tool executions are automatically wrapped and traced
        execute: async ({ location }: { location: string }) => {
          // This execution will appear as a child span
          return {
            location,
            temperature: 72,
            conditions: "sunny",
          };
        },
      },
    },
  });

  console.log(text);
}

main();
```

Vercel AI SDK v4 (model-level wrapper)

To wrap individual models, you can use `wrapAISDKModel` with specific model instances.

trace-vercel-ai-sdk.ts
```typescript
import { initLogger, wrapAISDKModel } from "braintrust";
import { openai } from "@ai-sdk/openai";

// `initLogger` sets up your code to log to the specified Braintrust project using your API key.
// By default, all wrapped models will log to this project. If you don't call `initLogger`,
// then wrapping is a no-op, and you will not see spans in the UI.
initLogger({
  projectName: "My Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const model = wrapAISDKModel(openai.chat("gpt-3.5-turbo"));

async function main() {
  // This will automatically log the request, response, and metrics to Braintrust
  const response = await model.doGenerate({
    inputFormat: "messages",
    mode: {
      type: "regular",
    },
    prompt: [
      {
        role: "user",
        content: [{ type: "text", text: "What is the capital of France?" }],
      },
    ],
  });
  console.log(response);
}

main();
```

Wrapping tools

Wrap tool implementations with `wrapTraced`. Here is a full example, modified from the Node.js Quickstart.

trace-vercel-ai-sdk-tools.ts
```typescript
import { openai } from "@ai-sdk/openai";
import { CoreMessage, streamText, tool } from "ai";
import { z } from "zod";
import * as readline from "node:readline/promises";
import { initLogger, traced, wrapAISDKModel, wrapTraced } from "braintrust";

const logger = initLogger({
  projectName: "<YOUR PROJECT NAME>",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: CoreMessage[] = [];

async function main() {
  while (true) {
    const userInput = await terminal.question("You: ");

    await traced(async (span) => {
      span.log({ input: userInput });
      messages.push({ role: "user", content: userInput });

      const result = streamText({
        model: wrapAISDKModel(openai("gpt-4o")),
        messages,
        tools: {
          weather: tool({
            description: "Get the weather in a location (in Celsius)",
            parameters: z.object({
              location: z
                .string()
                .describe("The location to get the weather for"),
            }),
            execute: wrapTraced(
              async function weather({ location }) {
                return {
                  location,
                  temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
                };
              },
              {
                type: "tool",
              },
            ),
          }),
          convertCelsiusToFahrenheit: tool({
            description: "Convert a temperature from Celsius to Fahrenheit",
            parameters: z.object({
              celsius: z
                .number()
                .describe("The temperature in Celsius to convert"),
            }),
            execute: wrapTraced(
              async function convertCelsiusToFahrenheit({ celsius }) {
                const fahrenheit = (celsius * 9) / 5 + 32;
                return { fahrenheit: Math.round(fahrenheit * 100) / 100 };
              },
              {
                type: "tool",
              },
            ),
          }),
        },
        maxSteps: 5,
        onStepFinish: (step) => {
          console.log(JSON.stringify(step, null, 2));
        },
      });

      let fullResponse = "";
      process.stdout.write("\nAssistant: ");
      for await (const delta of result.textStream) {
        fullResponse += delta;
        process.stdout.write(delta);
      }
      process.stdout.write("\n\n");

      messages.push({ role: "assistant", content: fullResponse });

      span.log({ output: fullResponse });
    });
  }
}

main().catch(console.error);
```
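Conceptually, wrapping a tool like this is just a higher-order function: record the input, run the original function, record the output, and return it unchanged. The sketch below illustrates that pattern; it is not Braintrust's actual implementation, and `SpanRecord` and `wrapTracedSketch` are names invented for illustration only.

```typescript
// Simplified sketch of what a tracing wrapper does. NOT Braintrust's
// implementation -- just an illustration of the higher-order-function pattern.
type SpanRecord = { name: string; input: unknown; output?: unknown };

const spans: SpanRecord[] = [];

function wrapTracedSketch<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => Promise<R>,
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    // Record the input before calling the wrapped function
    const span: SpanRecord = { name, input: args };
    spans.push(span);
    const result = await fn(...args);
    // Record the output, then hand the result back unchanged
    span.output = result;
    return result;
  };
}

// Example: wrapping a tool implementation, as in the example above
const convert = wrapTracedSketch(
  "convertCelsiusToFahrenheit",
  async ({ celsius }: { celsius: number }) => ({
    fahrenheit: (celsius * 9) / 5 + 32,
  }),
);

convert({ celsius: 100 }).then((r) => console.log(r.fahrenheit, spans.length)); // 212 1
```

Because the wrapper returns the original result, the caller's behavior is unchanged; the only side effect is the recorded span.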

When you run this code, you'll see traces like this in the Braintrust UI:

AI SDK with tool calls
