How LLMs stream responses

Published: January 21, 2025

A streamed LLM response consists of data emitted incrementally and continuously. Streamed data looks different from the server and from the client.

From the server

To understand what a streamed response looks like, I prompted Gemini to tell me a long joke using the command-line tool curl. Consider the following call to the Gemini API. If you try it, make sure to replace {GOOGLE_API_KEY} in the URL with your Gemini API key.

$ curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:streamGenerateContent?alt=sse&key={GOOGLE_API_KEY}" \
      -H 'Content-Type: application/json' \
      --no-buffer \
      -d '{ "contents":[{"parts":[{"text": "Tell me a long T-rex joke, please."}]}]}'

This request logs the following (truncated) output, in event stream format. Each line starts with data: followed by the message payload. The specific format isn't actually important; what matters are the chunks of text.

data: {
  "candidates": [{
    "content": {
      "parts": [{ "text": "A T-Rex" }],
      "role": "model"
    },
    "finishReason": "STOP",
    "index": 0,
    "safetyRatings": [
      { "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "probability": "NEGLIGIBLE" },
      { "category": "HARM_CATEGORY_HATE_SPEECH", "probability": "NEGLIGIBLE" },
      { "category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE" },
      { "category": "HARM_CATEGORY_DANGEROUS_CONTENT", "probability": "NEGLIGIBLE" }
    ]
  }],
  "usageMetadata": { "promptTokenCount": 11, "candidatesTokenCount": 4, "totalTokenCount": 15 }
}

data: {
  "candidates": [{
    "content": {
      "parts": [{ "text": " walks into a bar and orders a drink. As he sits there, he notices a" }],
      "role": "model"
    },
    "finishReason": "STOP",
    "index": 0,
    "safetyRatings": [
      { "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "probability": "NEGLIGIBLE" },
      { "category": "HARM_CATEGORY_HATE_SPEECH", "probability": "NEGLIGIBLE" },
      { "category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE" },
      { "category": "HARM_CATEGORY_DANGEROUS_CONTENT", "probability": "NEGLIGIBLE" }
    ]
  }],
  "usageMetadata": { "promptTokenCount": 11, "candidatesTokenCount": 21, "totalTokenCount": 32 }
}
As the command executes, response chunks stream in.

Each payload is JSON. Take a closer look at candidates[0].content.parts[0].text in the first one:

{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text": "A T-Rex"
          }
        ],
        "role": "model"
      },
      "finishReason": "STOP",
      "index": 0,
      "safetyRatings": [
        {
          "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
          "probability": "NEGLIGIBLE"
        },
        {
          "category": "HARM_CATEGORY_HATE_SPEECH",
          "probability": "NEGLIGIBLE"
        },
        {
          "category": "HARM_CATEGORY_HARASSMENT",
          "probability": "NEGLIGIBLE"
        },
        {
          "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
          "probability": "NEGLIGIBLE"
        }
      ]
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 11,
    "candidatesTokenCount": 4,
    "totalTokenCount": 15
  }
}

That first text entry is the beginning of Gemini's answer. As you extract more text entries, the response is newline-delimited.

The following snippet shows multiple text entries, which together make up the model's final response.

"A T-Rex" " was walking through the prehistoric jungle when he came across a group of Triceratops. " "\n\n\"Hey, Triceratops!\" the T-Rex roared. \"What are" " you guys doing?\"\n\nThe Triceratops, a bit nervous, mumbled, \"Just... just hanging out, you know? Relaxing.\"\n\n\"Well, you" " guys look pretty relaxed,\" the T-Rex said, eyeing them with a sly grin. \"Maybe you could give me a hand with something.\"\n\n\"A hand?\"" ... 

But what happens if, instead of a T-rex joke, you ask the model for something slightly more complex? For example, ask Gemini to come up with a JavaScript function to determine if a number is even or odd. This time, the text: chunks look slightly different.

The output now contains Markdown formatting, starting with the JavaScript code block. The following sample has the same preprocessing step applied as before: only the text entries are shown.

"```javascript\nfunction" " isEven(number) {\n // Check if the number is an integer.\n" " if (Number.isInteger(number)) {\n // Use the modulo operator" " (%) to check if the remainder after dividing by 2 is 0.\n return number % 2 === 0; \n } else {\n " "// Return false if the number is not an integer.\n return false;\n }\n}\n\n// Example usage:\nconsole.log(isEven(" "4)); // Output: true\nconsole.log(isEven(7)); // Output: false\nconsole.log(isEven(3.5)); // Output: false\n```\n\n**Explanation:**\n\n1. **`isEven(" "number)` function:**\n - Takes a single argument `number` representing the number to be checked.\n - Checks if the `number` is an integer using `Number.isInteger()`.\n - If it's an" ... 

To make things more challenging, some marked-up items begin in one chunk and end in another, and some of the markup is nested. In the preceding example, the function name is split across two chunks: one ends with **`isEven( and the next begins with number)` function:**. Combined, the output is **`isEven(number)` function:**. This means that if you want to output formatted Markdown, you can't just process each chunk individually with a Markdown parser.
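The simplest workaround, sketched below, is to accumulate all chunks received so far and re-parse the full buffer on every update, so markup that spans chunk boundaries stays intact. The sketch assumes a Markdown parser with a string-to-HTML function, here the marked library; note that re-parsing everything on every chunk has a performance cost, and that model output must be sanitized before being rendered as HTML, both of which the best practices article linked below covers.

import { marked } from 'marked';

const output = document.querySelector('#output');
let markdown = '';

function onChunk(chunk) {
  // Append the new chunk and re-parse the whole buffer, so markup
  // that straddles chunk boundaries still renders correctly.
  markdown += chunk;
  // Naive sketch: sanitize model output before assigning it to innerHTML.
  output.innerHTML = marked.parse(markdown);
}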

From the client

If you run a model like Gemma on the client with a framework such as MediaPipe LLM, streamed data arrives through a callback function.

For example:

llmInference.generateResponse(
  inputPrompt,
  (chunk, done) => {
    console.log(chunk);
});
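The done flag signals the final chunk. As a minimal sketch (assuming, as in MediaPipe's examples, that each callback delivers only the newly generated text), you can assemble the complete answer like this:

let response = '';

llmInference.generateResponse(inputPrompt, (chunk, done) => {
  // Each callback delivers a partial result; concatenate to build the answer.
  response += chunk;
  if (done) {
    console.log(response); // The complete response.
  }
});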

With the Prompt API, you get streamed data in chunks by iterating over a ReadableStream.

const languageModel = await LanguageModel.create();
const stream = languageModel.promptStreaming(inputPrompt);
for await (const chunk of stream) {
  console.log(chunk);
}
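Async iteration over a ReadableStream isn't available in every browser at the time of writing. Where it's missing, the same loop can be written with the stream's reader directly:

// Equivalent manual read loop using the stream's reader.
const reader = stream.getReader();
while (true) {
  const { value: chunk, done } = await reader.read();
  if (done) break;
  console.log(chunk);
}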

Next steps

Wondering how to render streamed data performantly and securely? Read our best practices for rendering LLM responses.