This project creates a real-time conversational AI, either serverless via SvelteKit/Static or using LangChain with FastAPI as a web server, streaming GPT model responses and supporting in-browser LLMs via webllm.
Updated Oct 27, 2024 - Svelte
AI app that runs LLMs in the browser, built with Svelte 5 and Skeleton 3. It also supports RAG over PDF, DOCX, and TXT files via drag and drop, with a configurable subsystem. Best of all, it leaks no data to a backend - it's completely private.