Vertex AI Node.js SDK

The Vertex AI Node.js SDK enables developers to use Google's state-of-the-art generative AI models (like Gemini) to build AI-powered features and applications.

See here for detailed samples using the Vertex AI Node.js SDK.

Before you begin

  1. Select or create a Cloud Platform project.
  2. Enable billing for your project.
  3. Enable the Vertex AI API.
  4. Set up authentication with a service account so you can access the API from your local workstation (see the example after this list).
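
For example, Application Default Credentials can be configured from a terminal. This is a sketch, not the only option: the key-file path below is a placeholder, and your environment may already be set up differently.

# Point Application Default Credentials at a service account key file (placeholder path) ...
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"

# ... or, for quick local development, authenticate with your own user credentials:
gcloud auth application-default login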

Installation

Install this SDK via npm:

npm install @google-cloud/vertexai 

Setup

To use the SDK, create an instance of VertexAI by passing it your Google Cloud project ID and location. Then create a reference to a generative model.

const {VertexAI, HarmCategory, HarmBlockThreshold} = require('@google-cloud/vertexai');

const project = 'your-cloud-project';
const location = 'us-central1';

const vertex_ai = new VertexAI({project: project, location: location});

// Instantiate models
const generativeModel = vertex_ai.preview.getGenerativeModel({
  model: 'gemini-pro',
  // The following parameters are optional
  // They can also be passed to individual content generation requests
  safety_settings: [{category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE}],
  generation_config: {max_output_tokens: 256},
});

const generativeVisionModel = vertex_ai.preview.getGenerativeModel({
  model: 'gemini-pro-vision',
});
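
As the comment above notes, safety_settings and generation_config can also be supplied on an individual request. A minimal sketch of what that might look like, assuming the request-level fields use the same names as the model-level options shown above:

// Sketch: per-request overrides for generation_config and safety_settings
// (field names assumed to match the model-level options above).
async function generateWithOverrides() {
  const request = {
    contents: [{role: 'user', parts: [{text: 'Write a short poem about the ocean.'}]}],
    generation_config: {max_output_tokens: 64, temperature: 0.2},
    safety_settings: [{category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE}],
  };
  const resp = await generativeModel.generateContent(request);
  console.log(JSON.stringify(await resp.response));
}

generateWithOverrides();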

Streaming content generation

async function streamGenerateContent() {
  const request = {
    contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],
  };
  const streamingResp = await generativeModel.generateContentStream(request);
  for await (const item of streamingResp.stream) {
    console.log('stream chunk: ', JSON.stringify(item));
  }
  console.log('aggregated response: ', JSON.stringify(await streamingResp.response));
}

streamGenerateContent();
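
If you only need the generated text, the text parts of each chunk can be concatenated as they arrive. This is a sketch following the same streaming pattern; the chunk shape mirrors the chat example below.

// Sketch: accumulate only the text of each streamed chunk.
async function streamGenerateText() {
  const request = {
    contents: [{role: 'user', parts: [{text: 'Tell me a short story.'}]}],
  };
  const streamingResp = await generativeModel.generateContentStream(request);
  let fullText = '';
  for await (const item of streamingResp.stream) {
    // Some chunks may not carry text, so fall back to an empty string.
    fullText += item.candidates?.[0]?.content?.parts?.[0]?.text ?? '';
  }
  console.log(fullText);
}

streamGenerateText();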

Streaming chat

async function streamChat() {
  const chat = generativeModel.startChat();
  const chatInput1 = 'How can I learn more about Node.js?';
  const result1 = await chat.sendMessageStream(chatInput1);
  for await (const item of result1.stream) {
    console.log(item.candidates[0].content.parts[0].text);
  }
  console.log('aggregated response: ', JSON.stringify(await result1.response));
}

streamChat();
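
The chat session can also be used without streaming. This is a minimal sketch, assuming a sendMessage method that mirrors sendMessageStream but resolves with the full reply:

// Sketch: non-streaming chat turn (sendMessage assumed to mirror sendMessageStream).
async function simpleChat() {
  const chat = generativeModel.startChat();
  const result = await chat.sendMessage('What is Node.js commonly used for?');
  console.log('chat response: ', JSON.stringify(await result.response));
}

simpleChat();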

Multi-part content generation

Providing a Google Cloud Storage image URI

async function multiPartContent() {
  const filePart = {file_data: {file_uri: 'gs://generativeai-downloads/images/scones.jpg', mime_type: 'image/jpeg'}};
  const textPart = {text: 'What is this a picture of?'};
  const request = {
    contents: [{role: 'user', parts: [textPart, filePart]}],
  };
  const streamingResp = await generativeVisionModel.generateContentStream(request);
  for await (const item of streamingResp.stream) {
    console.log('stream chunk: ', JSON.stringify(item));
  }
  const aggregatedResponse = await streamingResp.response;
  console.log(aggregatedResponse.candidates[0].content);
}

multiPartContent();

Providing a base64 image string

async function multiPartContentImageString() {
  // Replace this with your own base64 image string
  const base64Image = 'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg==';
  const filePart = {inline_data: {data: base64Image, mime_type: 'image/jpeg'}};
  const textPart = {text: 'What is this a picture of?'};
  const request = {
    contents: [{role: 'user', parts: [textPart, filePart]}],
  };
  const resp = await generativeVisionModel.generateContentStream(request);
  const contentResponse = await resp.response;
  console.log(contentResponse.candidates[0].content.parts[0].text);
}

multiPartContentImageString();
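
To produce such a base64 string from an image on disk, Node's built-in fs module can be used. This is a sketch; the local file path is a placeholder.

const fs = require('fs');

// Placeholder path: any local JPEG works the same way
const base64Image = fs.readFileSync('./my-image.jpg').toString('base64');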

Multi-part content with text and video

async function multiPartContentVideo() {
  const filePart = {file_data: {file_uri: 'gs://cloud-samples-data/video/animals.mp4', mime_type: 'video/mp4'}};
  const textPart = {text: 'What is in the video?'};
  const request = {
    contents: [{role: 'user', parts: [textPart, filePart]}],
  };
  const streamingResp = await generativeVisionModel.generateContentStream(request);
  for await (const item of streamingResp.stream) {
    console.log('stream chunk: ', JSON.stringify(item));
  }
  const aggregatedResponse = await streamingResp.response;
  console.log(aggregatedResponse.candidates[0].content);
}

multiPartContentVideo();

Content generation: non-streaming

async function generateContent() {
  const request = {
    contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],
  };
  const resp = await generativeModel.generateContent(request);
  console.log('aggregated response: ', JSON.stringify(await resp.response));
}

generateContent();

Counting tokens

async function countTokens() {
  const request = {
    contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],
  };
  const resp = await generativeModel.countTokens(request);
  console.log('count tokens response: ', resp);
}

countTokens();

License

The contents of this repository are licensed under the Apache License, version 2.0.