Here we have a stream coming from OpenAI. We need to read its chunks and stream each one back to the client as soon as it is ready. In a way, we are streaming a stream.
One could argue that this is not needed, but I am doing it to hide the API keys for Groq, OpenAI, and the Hygraph headless CMS, since I do not want them exposed to the client.
Route handler for /api/ask:
export const route = async (req: Request, res: Response) => {
  const { question } = req.query;

  if (typeof question !== "string") {
    return res
      .status(400)
      .send(`Client error: question query not type of string`);
  }

  try {
    const answerSource = await getAnswerSource();
    const answerChunks = await getAnswerChunks(answerSource, question, false);

    for await (const chunk of answerChunks) {
      console.log(chunk.response);

      // Stop streaming if the client has already disconnected.
      if (req.closed) {
        res.end();
        return;
      }

      res.write(chunk.response);
    }
  } catch (error) {
    return res.status(500).send(`Server error: ${error}`);
  }

  res.end();
};
Let's dissect it a bit:
const answerSource = await getAnswerSource();
Here we gather all the relevant data: the CV, blog posts, plus any pages you would like your AI clone to be aware of.
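The post does not show getAnswerSource itself, but since the content lives in Hygraph, a minimal sketch could query its GraphQL endpoint. The environment variable names, query shape, and field names below are assumptions for illustration only:

// A sketch of getAnswerSource, assuming the content lives in Hygraph.
// HYGRAPH_ENDPOINT, HYGRAPH_TOKEN, and the query/field names are
// hypothetical placeholders, not the post's actual schema.
export const getAnswerSource = async (): Promise<string> => {
  const response = await fetch(process.env.HYGRAPH_ENDPOINT!, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Keeping this token server-side is the whole point of the proxy route.
      Authorization: `Bearer ${process.env.HYGRAPH_TOKEN}`,
    },
    body: JSON.stringify({
      query: `{ posts { title content } page(where: { slug: "cv" }) { content } }`,
    }),
  });

  const { data } = await response.json();

  // Flatten everything into one plain-text blob the LLM can consume.
  const posts = data.posts
    .map((p: { title: string; content: string }) => `${p.title}\n${p.content}`)
    .join("\n\n");

  return `${data.page.content}\n\n${posts}`;
};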
const answerChunks = await getAnswerChunks(answerSource, question, false);
Here we feed our LLM the custom data and the question, and get a stream back as the response.
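getAnswerChunks is not shown in the post either. A sketch using the official openai Node SDK could look like the following. The model name and system prompt are assumptions, and I am guessing the third boolean argument toggles between Groq and OpenAI (both are mentioned above), so the sketch merely accepts it:

import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// A sketch of getAnswerChunks as an async generator yielding { response }
// objects, the shape the route handler consumes. The useGroq flag is my
// guess at what the boolean argument means; it is ignored here for brevity.
export async function* getAnswerChunks(
  answerSource: string,
  question: string,
  useGroq: boolean
) {
  const stream = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    stream: true,
    messages: [
      {
        role: "system",
        content: `Answer as my AI clone, using only this context:\n${answerSource}`,
      },
      { role: "user", content: question },
    ],
  });

  for await (const chunk of stream) {
    // Map the SDK's delta shape onto the { response: string } shape
    // the route handler expects.
    yield { response: chunk.choices[0]?.delta?.content ?? "" };
  }
}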
for await (const chunk of answerChunks) {
  console.log(chunk.response);

  if (req.closed) {
    res.end();
    return;
  }

  res.write(chunk.response);
}
Here we simply iterate over the chunks, writing each one to the response, and make sure we close the connection if the client has already closed theirs.
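On the client side, the stream can be consumed with fetch and a ReadableStream reader. This is a generic sketch, not code from the post; the element selector and callback are hypothetical:

// Hypothetical client-side consumer for /api/ask.
const askQuestion = async (
  question: string,
  onChunk: (text: string) => void
) => {
  const res = await fetch(`/api/ask?question=${encodeURIComponent(question)}`);
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();

  // Read until the server calls res.end(), rendering chunks as they arrive.
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
};

// Usage: append each chunk to the page as it streams in.
askQuestion("What is your experience with TypeScript?", (text) =>
  document.querySelector("#answer")!.append(text)
);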
And our streamception is done!
❤️ If you would like to stay in touch, please feel free to connect ❤️