A comprehensive Spring Boot application showcasing professional AI integration capabilities using OpenAI ChatGPT and Claude AI APIs. This demo demonstrates three service tiers from basic chat to advanced AI features with conversation memory and vector databases.
Perfect for a GitHub portfolio to showcase AI integration expertise!
**Basic Tier**
- ✅ Basic ChatGPT API integration
- ✅ One-off chat conversations
- ✅ Custom prompt engineering
- ✅ Model selection (GPT-4, GPT-3.5-turbo)
- ✅ Rate limiting & cost control
- ✅ Complete documentation

**Standard Tier**
- ✅ All Basic features, plus:
- ✅ Conversational AI with memory
- ✅ Multi-turn dialogue support
- ✅ Conversation history storage (see the sketch after this list)
- ✅ Multi-user support
- ✅ Session management
- ✅ Context-aware responses
- ✅ Content generation (blogs, emails, code)

**Advanced Tier**
- ✅ All Standard features, plus:
- ✅ Document analysis & summarization
- ✅ Sentiment analysis
- ✅ Key points extraction
- ✅ Question answering on documents
- ✅ Document comparison
- ✅ Advanced prompt optimization
- ✅ Production-ready error handling
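The conversation memory behind the Standard-tier features can be pictured as a JPA entity plus a repository that replays prior turns into each new request. The sketch below is a minimal illustration only; the `ConversationMessage` entity and repository names are assumptions and may not match the actual classes in this project.

```java
// Minimal sketch (hypothetical class names): persist each turn so it can be
// replayed as context on the next request.
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import java.time.Instant;
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

@Entity
public class ConversationMessage {
    @Id
    @GeneratedValue
    private Long id;
    private String conversationId;
    private String role;                 // "user" or "assistant"
    @Column(length = 4000)
    private String content;
    private Instant createdAt;
    // getters/setters omitted; Lombok could generate them
}

interface ConversationMessageRepository extends JpaRepository<ConversationMessage, Long> {
    // Ordered history for one conversation, ready to be re-sent to the model
    List<ConversationMessage> findByConversationIdOrderByCreatedAt(String conversationId);
}
```

On each follow-up call the stored messages are prepended to the new user message, which is what keeps the responses context-aware.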
- Backend: Java 17, Spring Boot 3.2.0
- AI Integration: OpenAI API (GPT-4, GPT-3.5-turbo)
- Database: H2 (in-memory), JPA/Hibernate
- Rate Limiting: Resilience4j
- Build Tool: Maven
- Libraries: Lombok, Jackson
- Java 17 or higher
- Maven 3.6+
- OpenAI API Key (get one at https://platform.openai.com/api-keys)
Clone the repository:

```bash
git clone https://github.com/codiebyheaart/spring-boot-openai-integration.git
cd ai-integration-demo
```

Create or update src/main/resources/application.properties:

```properties
openai.api.key=sk-your-actual-openai-api-key-here
```

Or set it as an environment variable (see the configuration sketch below):

```bash
export OPENAI_API_KEY=sk-your-actual-openai-api-key-here
```

Build and run:

```bash
mvn clean install
mvn spring-boot:run
```

- API Base URL: http://localhost:8080
- H2 Console: http://localhost:8080/h2-console
- Health Check: http://localhost:8080/api/chat/health
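Under the hood, the `openai.api.key` property (or the `OPENAI_API_KEY` environment variable, which Spring's relaxed binding maps to the same key) has to reach the HTTP client somehow. Here is a minimal sketch of one way to do that; the class and bean names are assumptions, not necessarily what this repo uses.

```java
// Hypothetical configuration sketch: inject the API key and pre-configure a client.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestClient;

@Configuration
public class OpenAIClientConfig {

    @Bean
    public RestClient openAiRestClient(@Value("${openai.api.key}") String apiKey) {
        // Every request through this client carries the Authorization header
        return RestClient.builder()
                .baseUrl("https://api.openai.com/v1")
                .defaultHeader("Authorization", "Bearer " + apiKey)
                .build();
    }
}
```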
**Simple chat**

```http
POST /api/chat
Content-Type: application/json

{
  "message": "Explain quantum computing in simple terms",
  "model": "gpt-3.5-turbo",
  "temperature": 0.7,
  "maxTokens": 1000
}
```

**Start a conversation**

```http
POST /api/chat/conversation
Content-Type: application/json

{
  "userId": "user123",
  "message": "What's the weather like?",
  "model": "gpt-3.5-turbo",
  "temperature": 0.7
}
```

**Continue a conversation**

```http
POST /api/chat/conversation/{conversationId}
Content-Type: application/json

{
  "message": "Tell me more about that"
}
```

**Get conversation history**

```http
GET /api/chat/history/{userId}
```

**Content generation**

```http
POST /api/content/blog-post?topic=AI in Healthcare&tone=professional&wordCount=800
POST /api/content/email?purpose=Meeting Request&tone=professional
POST /api/content/code?description=REST API endpoint for user login&language=java
POST /api/content/social-media?platform=linkedin&topic=AI Integration&includeHashtags=true
```

**Document analysis**

```http
POST /api/document/summarize?summaryLength=medium
Content-Type: application/x-www-form-urlencoded

document=Your long document text here...
```

```http
POST /api/document/sentiment?detailedAnalysis=true
Content-Type: application/x-www-form-urlencoded

text=Your text to analyze...
```

```http
POST /api/document/keypoints?maxPoints=5
Content-Type: application/x-www-form-urlencoded

document=Your document text...
```

```http
POST /api/document/qa?question=What are the main findings?
Content-Type: application/x-www-form-urlencoded

document=Your document text...
```

**Example use cases**

```http
// Chat with conversation memory for consistent support
POST /api/chat/conversation
{
  "userId": "customer_456",
  "message": "I need help with my order",
  "systemPrompt": "You are a helpful customer support agent."
}
```

```http
// Generate blog posts for content marketing
POST /api/content/blog-post
  ?topic=10 Benefits of Microservices Architecture
  &tone=professional
  &targetAudience=software developers
  &wordCount=1000
```

```http
// Analyze customer feedback
POST /api/document/sentiment?detailedAnalysis=true
document=Customer feedback text...
```

Control API costs by adjusting rate limits in application.properties:
```properties
ratelimit.requests-per-minute=10
ratelimit.timeout-duration=5
```

Switch between models based on needs:
- GPT-4: Best quality, higher cost
- GPT-3.5-turbo: Fast, cost-effective
Tune the `temperature` parameter to control response style (a request sketch follows this list):
- 0.0-0.3: Focused, deterministic responses
- 0.4-0.7: Balanced creativity
- 0.8-2.0: High creativity, varied responses
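To make the model and temperature knobs concrete, here is a rough sketch of building a Chat Completions request. The JSON shape follows OpenAI's public API, but the class and method names are illustrative only, not the actual code in this project.

```java
// Illustrative sketch: assemble a Chat Completions request with model and temperature.
import java.util.List;
import java.util.Map;
import org.springframework.http.MediaType;
import org.springframework.web.client.RestClient;

public class ChatCompletionSketch {

    private final RestClient client; // pre-configured with base URL and API key

    public ChatCompletionSketch(RestClient client) {
        this.client = client;
    }

    public String complete(String userMessage, String model, double temperature, int maxTokens) {
        Map<String, Object> body = Map.of(
                "model", model,                 // e.g. "gpt-4" or "gpt-3.5-turbo"
                "temperature", temperature,     // 0.0 = deterministic, higher = more creative
                "max_tokens", maxTokens,
                "messages", List.of(Map.of("role", "user", "content", userMessage)));

        // Returns the raw JSON response; real code would map it to a DTO with Jackson.
        return client.post()
                .uri("/chat/completions")
                .contentType(MediaType.APPLICATION_JSON)
                .body(body)
                .retrieve()
                .body(String.class);
    }
}
```

Lower temperatures suit factual Q&A and code generation; higher values suit brainstorming and creative content.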
- Never commit API keys to version control
- Use environment variables for sensitive data
- Implement authentication for production APIs
- Enable rate limiting to prevent abuse (see the sketch after this list)
- Monitor API usage and costs
- Validate all user inputs
- Use HTTPS in production
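The rate limiting mentioned above is provided by Resilience4j in this stack. A minimal sketch of wiring the `ratelimit.*` properties into a programmatic limiter might look like this; the bean name and exact wiring are assumptions for illustration.

```java
// Hypothetical sketch: build a Resilience4j RateLimiter from the custom
// ratelimit.* properties shown earlier. Names here are illustrative.
import java.time.Duration;
import io.github.resilience4j.ratelimiter.RateLimiter;
import io.github.resilience4j.ratelimiter.RateLimiterConfig;
import io.github.resilience4j.ratelimiter.RateLimiterRegistry;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RateLimitConfig {

    @Bean
    public RateLimiter openAiRateLimiter(
            @Value("${ratelimit.requests-per-minute:10}") int requestsPerMinute,
            @Value("${ratelimit.timeout-duration:5}") long timeoutSeconds) {

        RateLimiterConfig config = RateLimiterConfig.custom()
                .limitForPeriod(requestsPerMinute)                    // calls allowed per window
                .limitRefreshPeriod(Duration.ofMinutes(1))            // window length: one minute
                .timeoutDuration(Duration.ofSeconds(timeoutSeconds))  // how long callers may wait (seconds assumed)
                .build();

        return RateLimiterRegistry.of(config).rateLimiter("openai");
    }
}
```

Outbound OpenAI calls can then be wrapped with `RateLimiter.decorateSupplier(...)` so excess requests wait up to the configured timeout instead of running up costs.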
```
┌─────────────┐
│   Client    │
└──────┬──────┘
       │
       ▼
┌─────────────────────────────────────┐
│          REST Controllers           │
│  - ChatController                   │
│  - ContentGenerationController      │
│  - DocumentAnalysisController       │
└──────────┬──────────────────────────┘
           │
           ▼
┌─────────────────────────────────────┐
│            Service Layer            │
│  - OpenAIService                    │
│  - ConversationService              │
└──────────┬──────────────────────────┘
           │
     ┌─────┴────────┬──────────────┐
     ▼              ▼              ▼
┌──────────┐ ┌─────────────┐ ┌──────────┐
│  OpenAI  │ │  Database   │ │   Rate   │
│   API    │ │  (H2/JPA)   │ │ Limiter  │
└──────────┘ └─────────────┘ └──────────┘
```

Run the application and test endpoints:
```bash
# Health check
curl http://localhost:8080/api/chat/health

# Simple chat
curl -X POST http://localhost:8080/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, AI!", "model": "gpt-3.5-turbo"}'

# Generate content
curl -X POST "http://localhost:8080/api/content/blog-post?topic=Spring Boot Tips&tone=professional"
```

- API Documentation - Detailed API reference
- Deployment Guide - Production deployment instructions
- Create a new controller in the `controller/` package
- Implement business logic in the `service/` package
- Add custom prompts in `OpenAIService`
- Update the documentation (a controller sketch follows this list)
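As a rough illustration of those steps, a new controller might look like the sketch below. The endpoint, the prompt, and the `OpenAIService.chat(...)` method name are assumptions; adapt them to the service methods that actually exist.

```java
// Illustrative sketch of a new controller delegating to the service layer.
// The OpenAIService method shown is an assumption, not a documented signature.
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/translate")
public class TranslationController {

    private final OpenAIService openAIService;

    public TranslationController(OpenAIService openAIService) {
        this.openAIService = openAIService;
    }

    @PostMapping
    public String translate(@RequestParam String text, @RequestParam String targetLanguage) {
        // Custom prompt assembled here; prompt templates could also live in OpenAIService
        String prompt = "Translate the following text to " + targetLanguage + ":\n" + text;
        return openAIService.chat(prompt); // hypothetical method name
    }
}
```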
- Add the Claude AI dependency to `pom.xml`
- Create a `ClaudeService` similar to `OpenAIService`
- Add configuration for the Claude API key
- Create new endpoints or modify existing ones (a `ClaudeService` sketch follows this list)
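A minimal `ClaudeService` along those lines might look like this. The request shape follows Anthropic's public Messages API; the property name `claude.api.key`, the chosen model, and the method signature are assumptions.

```java
// Hypothetical sketch of a ClaudeService calling Anthropic's Messages API.
import java.util.List;
import java.util.Map;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestClient;

@Service
public class ClaudeService {

    private final RestClient client;

    public ClaudeService(@Value("${claude.api.key}") String apiKey) {
        this.client = RestClient.builder()
                .baseUrl("https://api.anthropic.com/v1")
                .defaultHeader("x-api-key", apiKey)
                .defaultHeader("anthropic-version", "2023-06-01")
                .build();
    }

    public String chat(String userMessage) {
        Map<String, Object> body = Map.of(
                "model", "claude-3-5-sonnet-20241022",   // pick the Claude model you need
                "max_tokens", 1024,
                "messages", List.of(Map.of("role", "user", "content", userMessage)));

        // Raw JSON back; map it to a DTO with Jackson in real code.
        return client.post()
                .uri("/messages")
                .contentType(MediaType.APPLICATION_JSON)
                .body(body)
                .retrieve()
                .body(String.class);
    }
}
```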
- Add vector DB dependency
- Create embeddings using OpenAI
- Store and retrieve vectors
- Implement semantic search (see the sketch after this list)
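As a starting point for those steps, the sketch below creates an embedding through OpenAI's embeddings endpoint and scores stored vectors by cosine similarity in memory; a production setup would store and query the vectors in Pinecone or another vector database instead. Class and method names are illustrative.

```java
// Illustrative sketch: embed text via OpenAI, then rank stored vectors by cosine similarity.
import java.util.List;
import java.util.Map;
import org.springframework.http.MediaType;
import org.springframework.web.client.RestClient;

public class EmbeddingSketch {

    private final RestClient openAiClient; // pre-configured with base URL and API key

    public EmbeddingSketch(RestClient openAiClient) {
        this.openAiClient = openAiClient;
    }

    @SuppressWarnings("unchecked")
    public List<Double> embed(String text) {
        Map<String, Object> response = openAiClient.post()
                .uri("/embeddings")
                .contentType(MediaType.APPLICATION_JSON)
                .body(Map.of("model", "text-embedding-3-small", "input", text))
                .retrieve()
                .body(Map.class);
        // Response shape: { "data": [ { "embedding": [ ... ] } ] }
        List<Map<String, Object>> data = (List<Map<String, Object>>) response.get("data");
        return (List<Double>) data.get(0).get("embedding");
    }

    // Cosine similarity between two vectors; higher means more semantically similar.
    public static double cosine(List<Double> a, List<Double> b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.size(); i++) {
            dot += a.get(i) * b.get(i);
            normA += a.get(i) * a.get(i);
            normB += b.get(i) * b.get(i);
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}
```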
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
Your Name
- GitHub: @yourusername
- LinkedIn: Your LinkedIn
- Email: your.email@example.com
Give a ⭐️ if this project helped you!
Have questions or need AI integration services?
- Create an issue
- Email: your.dilsecodie@gmail.com
- Add Claude AI integration
- Implement vector database (Pinecone)
- Add streaming responses
- Create frontend UI
- Add Docker support
- Implement caching layer
- Add more AI models (Google Gemini, etc.)
- Create comprehensive test suite
Built with ❤️ using Spring Boot and OpenAI