Using the Gemini API, you can build freeform conversations across multiple turns. The Firebase AI Logic SDK simplifies the process by managing the state of the conversation, so unlike with generateContent() (or generateContentStream()), you don't have to store the conversation history yourself.
Jump to text-only chat code | Jump to iterative image editing code | Jump to streamed-response code
Before you begin
| Click your Gemini API provider to view provider-specific content and code on this page. |
If you haven't already, complete the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the backend service for your chosen Gemini API provider, and create a GenerativeModel instance.
For testing and iterating on your prompts, we recommend using Google AI Studio.
Build a text-only chat experience
| Before trying this sample, complete the Before you begin section of this guide to set up your project and app. In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page. |
To build a multi-turn conversation (like chat), start by initializing the chat by calling startChat(). Then use sendMessage() to send a new user message; the message and the response are both appended to the chat history.
There are two possible options for the role associated with the content in a conversation:

- user: the role that provides the prompts. This value is the default for calls to sendMessage(), and the function throws an exception if a different role is passed.
- model: the role that provides the responses. This role can be used when calling startChat() with existing history.
Swift
You can call startChat() and sendMessage() to send a new user message:
```swift
import FirebaseAILogic

// Initialize the Gemini Developer API backend service
let ai = FirebaseAI.firebaseAI(backend: .googleAI())

// Create a `GenerativeModel` instance with a model that supports your use case
let model = ai.generativeModel(modelName: "gemini-2.5-flash")

// Optionally specify existing chat history
let history = [
  ModelContent(role: "user", parts: "Hello, I have 2 dogs in my house."),
  ModelContent(role: "model", parts: "Great to meet you. What would you like to know?"),
]

// Initialize the chat with optional chat history
let chat = model.startChat(history: history)

// To generate text output, call sendMessage and pass in the message
let response = try await chat.sendMessage("How many paws are in my house?")
print(response.text ?? "No text in response.")
```

Kotlin
You can call startChat() and sendMessage() to send a new user message:
```kotlin
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel("gemini-2.5-flash")

// Initialize the chat
val chat = model.startChat(
    history = listOf(
        content(role = "user") { text("Hello, I have 2 dogs in my house.") },
        content(role = "model") { text("Great to meet you. What would you like to know?") }
    )
)

val response = chat.sendMessage("How many paws are in my house?")
print(response.text)
```

Java
You can call startChat() and sendMessage() to send a new user message:
For Java, the methods in this SDK return a ListenableFuture.

```java
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI())
        .generativeModel("gemini-2.5-flash");

// Use the GenerativeModelFutures Java compatibility layer which offers
// support for ListenableFuture and Publisher APIs
GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// (optional) Create previous chat history for context
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();

Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();

List<Content> history = Arrays.asList(userContent, modelContent);

// Initialize the chat
ChatFutures chat = model.startChat(history);

// Create a new user message
Content.Builder messageBuilder = new Content.Builder();
messageBuilder.setRole("user");
messageBuilder.addText("How many paws are in my house?");
Content message = messageBuilder.build();

// Send the message
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(message);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);
```

Web
You can call startChat() and sendMessage() to send a new user message:
```javascript
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, { model: "gemini-2.5-flash" });

async function run() {
  const chat = model.startChat({
    history: [
      {
        role: "user",
        parts: [{ text: "Hello, I have 2 dogs in my house." }],
      },
      {
        role: "model",
        parts: [{ text: "Great to meet you. What would you like to know?" }],
      },
    ],
    generationConfig: {
      maxOutputTokens: 100,
    },
  });

  const msg = "How many paws are in my house?";
  const result = await chat.sendMessage(msg);

  const text = result.response.text();
  console.log(text);
}

run();
```

Dart
You can call startChat() and sendMessage() to send a new user message:
```dart
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
final model = FirebaseAI.googleAI().generativeModel(model: 'gemini-2.5-flash');

final chat = model.startChat();

// Provide a prompt that contains text
final prompt = Content.text('Write a story about a magic backpack.');

final response = await chat.sendMessage(prompt);
print(response.text);
```

Unity
You can call StartChat() and SendMessageAsync() to send a new user message:
```csharp
using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());

// Create a `GenerativeModel` instance with a model that supports your use case
var model = ai.GetGenerativeModel(modelName: "gemini-2.5-flash");

// Optionally specify existing chat history
var history = new [] {
  ModelContent.Text("Hello, I have 2 dogs in my house."),
  new ModelContent("model", new ModelContent.TextPart("Great to meet you. What would you like to know?")),
};

// Initialize the chat with optional chat history
var chat = model.StartChat(history);

// To generate text output, call SendMessageAsync and pass in the message
var response = await chat.SendMessageAsync("How many paws are in my house?");
UnityEngine.Debug.Log(response.Text ?? "No text in response.");
```

Learn how to choose a model appropriate for your use case and app.
Iterate on and edit images using multi-turn chat
| Before trying this sample, complete the Before you begin section of this guide to set up your project and app. In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page. |
Using multi-turn chat, you can work with a Gemini model to iterate on the images that it generates or that you supply.
Make sure to create a GenerativeModel instance that includes responseModalities: ["TEXT", "IMAGE"] in its model configuration, then call startChat() and sendMessage() to send new user messages.
Swift
```swift
import UIKit
import FirebaseAILogic

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let generativeModel = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Initialize the chat
let chat = generativeModel.startChat()

guard let image = UIImage(named: "scones") else { fatalError("Image file not found.") }

// Provide an initial text prompt instructing the model to edit the image
let prompt = "Edit this image to make it look like a cartoon"

// To generate an initial response, send a user message with the image and text prompt
let response = try await chat.sendMessage(image, prompt)

// Inspect the generated image
guard let inlineDataPart = response.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let uiImage = UIImage(data: inlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

// Follow-up requests do not need to specify the image again
let followUpResponse = try await chat.sendMessage("But make it old-school line drawing style")

// Inspect the edited image after the follow-up request
guard let followUpInlineDataPart = followUpResponse.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let followUpUIImage = UIImage(data: followUpInlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}
```

Kotlin
```kotlin
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

// Provide an image for the model to edit
val bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.scones)

// Create the initial prompt instructing the model to edit the image
val prompt = content {
    image(bitmap)
    text("Edit this image to make it look like a cartoon")
}

// Initialize the chat
val chat = model.startChat()

// To generate an initial response, send a user message with the image and text prompt
var response = chat.sendMessage(prompt)

// Inspect the returned image
var generatedImageAsBitmap = response
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image

// Follow-up requests do not need to specify the image again
response = chat.sendMessage("But make it old-school line drawing style")
generatedImageAsBitmap = response
    .candidates.first().content.parts.filterIsInstance<ImagePart>().firstOrNull()?.image
```

Java
```java
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.5-flash-image",
    // Configure the model to respond with text and images (required)
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide an image for the model to edit
Bitmap bitmap = BitmapFactory.decodeResource(resources, R.drawable.scones);

// Initialize the chat
ChatFutures chat = model.startChat();

// Create the initial prompt instructing the model to edit the image
Content prompt = new Content.Builder()
    .setRole("user")
    .addImage(bitmap)
    .addText("Edit this image to make it look like a cartoon")
    .build();

// To generate an initial response, send a user message with the image and text prompt
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(prompt);

// Extract the image from the initial response
ListenableFuture<@Nullable Bitmap> initialRequest = Futures.transform(response, result -> {
    for (Part part : result.getCandidates().get(0).getContent().getParts()) {
        if (part instanceof ImagePart) {
            ImagePart imagePart = (ImagePart) part;
            return imagePart.getImage();
        }
    }
    return null;
}, executor);

// Follow-up requests do not need to specify the image again
ListenableFuture<GenerateContentResponse> modelResponseFuture = Futures.transformAsync(
    initialRequest,
    generatedImage -> {
        Content followUpPrompt = new Content.Builder()
            .addText("But make it old-school line drawing style")
            .build();
        return chat.sendMessage(followUpPrompt);
    },
    executor);

// Add a final callback to check the reworked image
Futures.addCallback(modelResponseFuture, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        for (Part part : result.getCandidates().get(0).getContent().getParts()) {
            if (part instanceof ImagePart) {
                ImagePart imagePart = (ImagePart) part;
                Bitmap generatedImageAsBitmap = imagePart.getImage();
                break;
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);
```

Web
```javascript
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Prepare an image for the model to edit
async function fileToGenerativePart(file) {
  const base64EncodedDataPromise = new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
    reader.readAsDataURL(file);
  });
  return {
    inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
  };
}

const fileInputEl = document.querySelector("input[type=file]");
const imagePart = await fileToGenerativePart(fileInputEl.files[0]);

// Provide an initial text prompt instructing the model to edit the image
const prompt = "Edit this image to make it look like a cartoon";

// Initialize the chat
const chat = model.startChat();

// To generate an initial response, send a user message with the image and text prompt
const result = await chat.sendMessage([prompt, imagePart]);

// Request and inspect the generated image
try {
  const inlineDataParts = result.response.inlineDataParts();
  if (inlineDataParts?.[0]) {
    // Inspect the generated image
    const image = inlineDataParts[0].inlineData;
    console.log(image.mimeType, image.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

// Follow-up requests do not need to specify the image again
const followUpResult = await chat.sendMessage("But make it old-school line drawing style");

// Request and inspect the returned image
try {
  const followUpInlineDataParts = followUpResult.response.inlineDataParts();
  if (followUpInlineDataParts?.[0]) {
    // Inspect the generated image
    const followUpImage = followUpInlineDataParts[0].inlineData;
    console.log(followUpImage.mimeType, followUpImage.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}
```

Dart
```dart
import 'dart:io';

import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.5-flash-image',
  // Configure the model to respond with text and images (required)
  generationConfig: GenerationConfig(
    responseModalities: [ResponseModalities.text, ResponseModalities.image],
  ),
);

// Prepare an image for the model to edit
final image = await File('scones.jpg').readAsBytes();
final imagePart = InlineDataPart('image/jpeg', image);

// Provide an initial text prompt instructing the model to edit the image
final prompt = TextPart("Edit this image to make it look like a cartoon");

// Initialize the chat
final chat = model.startChat();

// To generate an initial response, send a user message with the image and text prompt
final response = await chat.sendMessage(
  Content.multi([prompt, imagePart]),
);

// Inspect the returned image
if (response.inlineDataParts.isNotEmpty) {
  final imageBytes = response.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

// Follow-up requests do not need to specify the image again
final followUpResponse = await chat.sendMessage(
  Content.text("But make it old-school line drawing style"),
);

// Inspect the returned image
if (followUpResponse.inlineDataParts.isNotEmpty) {
  final followUpImageBytes = followUpResponse.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}
```

Unity
```csharp
using System.Linq;
using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.5-flash-image",
  // Configure the model to respond with text and images (required)
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Prepare an image for the model to edit
var imageFile = System.IO.File.ReadAllBytes(System.IO.Path.Combine(
  UnityEngine.Application.streamingAssetsPath, "scones.jpg"));
var image = ModelContent.InlineData("image/jpeg", imageFile);

// Provide an initial text prompt instructing the model to edit the image
var prompt = ModelContent.Text("Edit this image to make it look like a cartoon.");

// Initialize the chat
var chat = model.StartChat();

// To generate an initial response, send a user message with the image and text prompt
var response = await chat.SendMessageAsync(new [] { prompt, image });

// Inspect the returned image
var imageParts = response.Candidates.First().Content.Parts
  .OfType<ModelContent.InlineDataPart>()
  .Where(part => part.MimeType == "image/png");

// Load the image into a Unity Texture2D object
UnityEngine.Texture2D texture2D = new(2, 2);
if (texture2D.LoadImage(imageParts.First().Data.ToArray())) {
  // Do something with the image
}

// Follow-up requests do not need to specify the image again
var followUpResponse = await chat.SendMessageAsync("But make it old-school line drawing style");

// Inspect the returned image
var followUpImageParts = followUpResponse.Candidates.First().Content.Parts
  .OfType<ModelContent.InlineDataPart>()
  .Where(part => part.MimeType == "image/png");

// Load the image into a Unity Texture2D object
UnityEngine.Texture2D followUpTexture2D = new(2, 2);
if (followUpTexture2D.LoadImage(followUpImageParts.First().Data.ToArray())) {
  // Do something with the image
}
```

Stream the response
| Before trying this sample, complete the Before you begin section of this guide to set up your project and app. In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page. |
Rather than waiting for the model to generate the complete result, you can use streaming to handle partial results and enable faster interactions. To stream the response, call sendMessageStream().
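As an illustration, here's a minimal Swift sketch of a streamed chat turn. The model setup matches the earlier examples on this page; the iteration pattern (an async stream of partial GenerateContentResponse chunks) is an assumption to verify against the SDK reference for your platform:

```swift
import FirebaseAILogic

// Assumed setup, matching the earlier examples on this page
let ai = FirebaseAI.firebaseAI(backend: .googleAI())
let model = ai.generativeModel(modelName: "gemini-2.5-flash")

// Initialize the chat
let chat = model.startChat()

// sendMessageStream returns partial results as the model generates them
let responseStream = try chat.sendMessageStream("How many paws are in my house?")
for try await chunk in responseStream {
  // Each chunk carries a partial piece of the answer text
  if let text = chunk.text {
    print(text, terminator: "")
  }
}
```

As with sendMessage(), the SDK manages the chat history for you, appending the user message and the completed response once streaming finishes.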
What else can you do?
- Learn how to count tokens before sending long prompts to the model (see the sketch after this list).
- Set up Cloud Storage for Firebase so that you can include large files in your multimodal requests and have a more managed solution for providing files in prompts. Files can include images, PDFs, video, and audio.
- Start thinking about preparing for production (see the production checklist), including:
  - Setting up Firebase App Check to protect the Gemini API from abuse by unauthorized clients.
  - Integrating Firebase Remote Config to update values in your app (like the model name) without releasing a new app version.
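For the token-counting item above, here's a minimal Swift sketch. It assumes a countTokens() method on the model returning a response with a totalTokens field; treat the exact names as assumptions to verify against the SDK reference:

```swift
import FirebaseAILogic

// Assumed setup, matching the earlier examples on this page
let ai = FirebaseAI.firebaseAI(backend: .googleAI())
let model = ai.generativeModel(modelName: "gemini-2.5-flash")

let longPrompt = "Write a story about a magic backpack."

// Count tokens before sending the prompt
// (countTokens() and totalTokens are assumed API names)
let countResponse = try await model.countTokens(longPrompt)
print("Tokens in prompt: \(countResponse.totalTokens)")
```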
Try out other capabilities
- Generate text from text-only prompts.
- Generate text by prompting with various file types, like images, PDFs, video, and audio.
- Generate structured output (like JSON) from both text and multimodal prompts.
- Generate images from text prompts (Gemini or Imagen).
- Use tools (like function calling and grounding with Google Search) to connect a Gemini model to other parts of your app and to external systems and information.
Learn how to control content generation
- Understand prompt design, including best practices, strategies, and example prompts.
- Configure model parameters, like temperature and maximum output tokens (for Gemini) or aspect ratio and person generation (for Imagen); see the sketch after this list.
- Use safety settings to adjust the likelihood of getting responses that may be considered harmful.
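As a sketch of the model-parameter item above (Swift; the GenerationConfig initializer fields shown here are assumptions based on the image-editing examples earlier on this page, so verify them against the SDK reference):

```swift
import FirebaseAILogic

let ai = FirebaseAI.firebaseAI(backend: .googleAI())

// Assumed GenerationConfig fields: temperature, topP, maxOutputTokens
let config = GenerationConfig(
  temperature: 0.9,      // higher values produce more varied output
  topP: 0.95,
  maxOutputTokens: 1024  // cap on the number of tokens the model may generate
)

// Apply the configuration when creating the model instance
let model = ai.generativeModel(
  modelName: "gemini-2.5-flash",
  generationConfig: config
)
```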
Learn more about the supported models
Learn about the models available for various use cases, as well as their quotas and pricing.

Give feedback on your experience with Firebase AI Logic