# ChatGPT Ruby

A lightweight Ruby wrapper for the OpenAI API, designed for simplicity and ease of integration.
## Features

- API integration for chat completions and text completions
- Streaming capability for handling real-time response chunks
- Custom exception classes for different API error types
- Configurable timeout, retries, and default parameters
- Complete test suite with mocked API responses
## Table of Contents

- Features
- Installation
- Quick Start
- Configuration
- Rails Integration
- Error Handling
- Current Capabilities
- Roadmap
- Development
- Contributing
- License
## Installation

Add to your Gemfile:

```ruby
gem 'chatgpt-ruby'
```

Or install directly:

```shell
$ gem install chatgpt-ruby
```
## Quick Start

```ruby
require 'chatgpt'

# Initialize with API key
client = ChatGPT::Client.new(ENV['OPENAI_API_KEY'])

# Chat API (recommended for GPT-3.5-turbo, GPT-4)
response = client.chat([
  { role: "user", content: "What is Ruby?" }
])
puts response.dig("choices", 0, "message", "content")

# Completions API (for GPT-3.5-turbo-instruct)
response = client.completions("What is Ruby?")
puts response.dig("choices", 0, "text")
```
## Rails Integration

In a Rails application, create an initializer:

```ruby
# config/initializers/chat_gpt.rb
require 'chatgpt'

ChatGPT.configure do |config|
  config.api_key = Rails.application.credentials.openai[:api_key]
  config.default_engine = 'gpt-3.5-turbo'
  config.request_timeout = 30
end
```

Then use it in your controllers or services:

```ruby
# app/services/chat_gpt_service.rb
class ChatGPTService
  def initialize
    @client = ChatGPT::Client.new
  end

  def ask_question(question)
    response = @client.chat([
      { role: "user", content: question }
    ])
    response.dig("choices", 0, "message", "content")
  end
end

# Usage in a controller
def show
  service = ChatGPTService.new
  @response = service.ask_question("Tell me about Ruby on Rails")
end
```
## Configuration

```ruby
ChatGPT.configure do |config|
  config.api_key = ENV['OPENAI_API_KEY']
  config.api_version = 'v1'
  config.default_engine = 'gpt-3.5-turbo'
  config.request_timeout = 30
  config.max_retries = 3
  config.default_parameters = {
    max_tokens: 16,
    temperature: 0.5,
    top_p: 1.0,
    n: 1
  }
end
```
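The `configure` block above follows the common module-level configuration pattern used by many Ruby gems. As a point of reference, a minimal sketch of how that pattern is typically implemented is shown below — this is illustrative only (the `MyGem` module and its attributes are hypothetical), not the gem's actual internals:

```ruby
# Illustrative only: a minimal module-level configure pattern,
# not chatgpt-ruby's actual implementation.
module MyGem
  class Configuration
    attr_accessor :api_key, :default_engine, :request_timeout

    def initialize
      # Defaults used when a setting is not overridden
      @default_engine  = 'gpt-3.5-turbo'
      @request_timeout = 30
    end
  end

  # Memoized singleton configuration object
  def self.configuration
    @configuration ||= Configuration.new
  end

  # Yields the configuration so callers can set attributes in a block
  def self.configure
    yield(configuration)
  end
end

MyGem.configure do |config|
  config.api_key         = 'test-key'
  config.request_timeout = 60
end
```

The memoized `configuration` object means settings persist process-wide after the initializer runs, which is why a Rails initializer is a natural place for the block.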
## Error Handling

```ruby
begin
  response = client.chat([{ role: "user", content: "Hello!" }])
rescue ChatGPT::AuthenticationError => e
  puts "Authentication error: #{e.message}"
rescue ChatGPT::RateLimitError => e
  puts "Rate limit hit: #{e.message}"
rescue ChatGPT::InvalidRequestError => e
  puts "Bad request: #{e.message}"
rescue ChatGPT::APIError => e
  puts "API error: #{e.message}"
end
```
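For transient failures such as rate limits, rescuing once may not be enough. A hedged sketch of a retry-with-backoff wrapper follows — the `with_retries` helper and its backoff schedule are illustrative, not part of the gem, and `ChatGPT::RateLimitError` is stubbed so the sketch runs standalone:

```ruby
# Illustrative retry helper; the error class is stubbed here so the
# sketch is self-contained.
module ChatGPT
  class RateLimitError < StandardError; end
end

def with_retries(max_attempts: 3, base_delay: 1)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue ChatGPT::RateLimitError
    raise if attempts >= max_attempts
    # Exponential backoff: base_delay, 2x, 4x, ...
    sleep(base_delay * (2**(attempts - 1)))
    retry
  end
end

# Simulated client call that fails twice, then succeeds
calls = 0
result = with_retries(max_attempts: 3, base_delay: 0) do
  calls += 1
  raise ChatGPT::RateLimitError if calls < 3
  "ok"
end
```

In real use you would wrap `client.chat(...)` in the block; the gem's own `max_retries` setting may already cover simple cases.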
## Current Capabilities

### Chat

```ruby
# Basic chat
response = client.chat([
  { role: "user", content: "What is Ruby?" }
])

# With streaming
client.chat_stream([{ role: "user", content: "Tell me a story" }]) do |chunk|
  print chunk.dig("choices", 0, "delta", "content")
end
```
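When streaming, each chunk carries only a delta, so building the full reply means concatenating the pieces. The sketch below simulates chunks in the hash shape the block receives (the exact terminal-chunk shape is an assumption; deltas without content are guarded against):

```ruby
# Simulated stream chunks in the shape chat_stream yields to its block.
# Final chunks are assumed to carry an empty delta.
chunks = [
  { "choices" => [{ "delta" => { "content" => "Once " } }] },
  { "choices" => [{ "delta" => { "content" => "upon a time" } }] },
  { "choices" => [{ "delta" => {} }] } # terminal chunk, no content
]

full_text = +""
chunks.each do |chunk|
  piece = chunk.dig("choices", 0, "delta", "content")
  full_text << piece if piece # nil guard for content-less deltas
end
```

The `if piece` guard matters in practice: printing or appending `nil` from the final chunk is a common streaming bug.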
### Completions

```ruby
# Basic completion with GPT-3.5-turbo-instruct
response = client.completions("What is Ruby?")
puts response.dig("choices", 0, "text")
```
## Roadmap

While ChatGPT Ruby is functional, several areas are planned for improvement:
- Response object wrapper & Rails integration with Railtie (v2.2)
- Token counting, function calling, and rate limiting (v2.3)
- Batch operations and async support (v3.0)
- DALL-E image generation and fine-tuning (Future)
❤️ Contributions in any of these areas are welcome!
## Development

```shell
# Run tests
bundle exec rake test

# Run linter
bundle exec rubocop

# Generate documentation
bundle exec yard doc
```
## Contributing

- Fork it
- Create your feature branch (`git checkout -b feature/my-new-feature`)
- Add tests for your feature
- Make your changes
- Commit your changes (`git commit -am 'Add some feature'`)
- Push to the branch (`git push origin feature/my-new-feature`)
- Create a new Pull Request
## License

Released under the MIT License. See LICENSE for details.