PodBot is a specialized AI chatbot that provides personalized podcast recommendations and discusses all things podcasting. Built with modern web technologies and powered by OpenAI, it maintains conversation context across sessions using Redis Agent Memory Server.
- Clone this repo.
- Set up a `.env` file:

  ```shell
  cp .env.example .env
  ```

- Add your OpenAI API key to `.env`:

  ```
  OPENAI_API_KEY=your_openai_api_key_here
  ```

- Start all services:

  ```shell
  docker compose up
  ```

- Open your browser to http://localhost:3000
That's it! Enter a username and start chatting with PodBot about podcasts.
- Enter a username and click "Load" to start or resume a conversation
- Ask about podcasts - anything from recommendations to industry discussion
- Get AI-powered responses with personalized suggestions based on your conversation history
- Clear your session anytime to start fresh
PodBot is built as a microservices architecture with four main components:

**Frontend (chat-web)**
- Vite + TypeScript for fast development and type safety
- Nginx reverse proxy for efficient static serving and API routing
- Marked.js for markdown rendering of bot responses
- FontAwesome for modern UI icons

**Backend (chat-api)**
- Node.js + Express for the web server
- TypeScript for end-to-end type safety
- LangChain for LLM integration and message handling
- Clean architecture with adapters, services, and routes

**AI & Memory**
- OpenAI GPT-4o-mini via LangChain for intelligent responses
- Redis Agent Memory Server (AMS) for persistent conversation context
- Smart context window management for efficient token usage

**Infrastructure**
- Redis database for session storage and caching
- Docker Compose for orchestrating all services
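The "smart context window management" mentioned above can be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation: `trimToTokenBudget` is a hypothetical helper, and the chars/4 token estimate stands in for a real tokenizer.

```typescript
// Sketch: keep only the newest messages that fit within a token budget
// (e.g. AMS_CONTEXT_WINDOW_MAX). Illustrative only, not PodBot's real code.
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

// Rough token estimate: ~4 characters per token. A real implementation
// would use the model's tokenizer instead.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function trimToTokenBudget(
  messages: ChatMessage[],
  maxTokens: number
): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let used = 0;
  // Walk from newest to oldest so the most recent context survives trimming.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (used + cost > maxTokens) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}
```

Trimming from the newest message backwards means long conversations degrade gracefully: old turns fall out of the prompt while recent context is always preserved.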
```mermaid
graph LR
    A[Web UI<br/>Vite + TypeScript] --> B[Chat API<br/>Express + LangChain.js]
    B --> C[Agent Memory Server<br/>Python + FastAPI]
    C --> D[Redis<br/>Database]
    B --> E[OpenAI<br/>GPT-4o models]
    C --> E
```
- Podcast-Focused AI: Specialized chatbot that only discusses podcasts and recommendations
- Persistent Memory: Conversation history maintained across sessions
- Modern UI: Responsive chat interface with markdown support and loading states
- Type Safety: Full-stack TypeScript for reliable development
- Container Ready: Complete Docker setup for easy deployment
- Fast Performance: Vite for lightning-fast development and optimized builds
Test the backend directly with curl:
```shell
# Send a message
curl -X POST http://localhost:3001/sessions/testuser \
  -H "Content-Type: application/json" \
  -d '{"message": "Recommend some true crime podcasts"}'

# Get conversation history
curl -X GET http://localhost:3001/sessions/testuser

# Clear conversation
curl -X DELETE http://localhost:3001/sessions/testuser
```
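The same endpoints can be called from TypeScript with `fetch`. This is a hedged sketch: `buildChatRequest` and `sendMessage` are illustrative names, not the project's actual client code; only the URL shape and `{ message }` body come from the curl examples above.

```typescript
// Sketch of a client for the chat API's POST /sessions/:user endpoint.
type ChatRequest = {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
};

// Pure helper: builds the request so it can be inspected or tested
// without a running server.
function buildChatRequest(user: string, message: string): ChatRequest {
  return {
    url: `http://localhost:3001/sessions/${encodeURIComponent(user)}`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message }),
    },
  };
}

// Sends a message and returns the parsed JSON response.
async function sendMessage(user: string, message: string): Promise<unknown> {
  const { url, init } = buildChatRequest(user, message);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`Chat API error: ${res.status}`);
  return res.json();
}
```

Separating request construction from the network call keeps the URL and body logic unit-testable.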
```shell
# Backend development
cd chat-api && npm run dev

# Frontend development
cd chat-web && npm run dev

# View logs
docker compose logs -f chat-api
docker compose logs -f agent-memory-server
```
- `OPENAI_API_KEY` - Your OpenAI API key (get one here)
- `AMS_CONTEXT_WINDOW_MAX` - Token limit for context window (default: 4000)
- `PORT` - Chat API server port (default: 3001)
- `AUTH_MODE` - AMS authentication mode (default: disabled)
- `LOG_LEVEL` - AMS logging level (default: DEBUG)
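Putting these together, a complete `.env` might look like the following. The values below are placeholders: only `OPENAI_API_KEY` is required per the quick start, and the rest simply restate the documented defaults.

```
OPENAI_API_KEY=your_openai_api_key_here
AMS_CONTEXT_WINDOW_MAX=4000
PORT=3001
AUTH_MODE=disabled
LOG_LEVEL=DEBUG
```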
The application runs as four containerized services:
| Service | Port | Description |
|---|---|---|
| chat-web | 3000 | Frontend web interface (Nginx + Vite build) |
| chat-api | 3001 | Backend API server (Node.js + Express) |
| agent-memory-server | 8000 | Memory management service (Python + FastAPI) |
| redis | 6379 | Database for session storage |
```shell
# Rebuild and restart services
docker compose up --build

# Run in background
docker compose up -d

# Stop all services
docker compose down

# View all logs
docker compose logs -f
```
For detailed implementation information, see CLAUDE.md