
Welcome to the OpenAI Python API library.



The OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.7+ application. The library includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by httpx. It is generated from our OpenAPI specification with Stainless.

The OpenAI Python library is like a toolbox that makes it easy to use OpenAI models in your Python programs. Imagine you have a smart robot assistant that can help you with various tasks like answering questions or generating text. This library helps you communicate with that robot over the internet using a set of rules called an API.


Documentation


OpenAI Python API Introduction

Github Documentation

The GitHub OpenAI documentation is a user manual. It has all the instructions and information you need to use OpenAI's AI models in a Python script. Think of this as a detailed guide that shows you how to communicate with your smart robot assistant. This README.md file will teach you how to use the tools provided by OpenAI so you can then use them in your own projects.

The words in blue are here to help new students learn the technical terms, and there is a Quick Definitions section at the end of this document that explains each technical term, which helps you understand and learn faster. To make things easier for new students, there are instructions for each tool from the table of contents and a script to run, made for your convenience.

OpenAI Documentation

Alternatively, the QuickStart guide is a quick and easy way to get started with OpenAI's AI models in your Python programs. It provides step-by-step instructions on how to set up your development environment, send your first API request, and use the OpenAI API to generate text, images, and more. This guide is perfect for developers and programmers who want to get started quickly with OpenAI's AI models in their Python programs.

There is more information available in the OpenAI Documentation that provides a comprehensive overview of OpenAI's AI models and how to use them in your Python programs. This documentation includes detailed information on the different AI models available, how to set up your development environment, and how to use the OpenAI API to generate text, images, and more.

The REST API documentation can be found on platform.openai.com. The full set of APIs used by the library can be found in api.md.

The OpenAI Python client is a tool that allows you to use artificial intelligence (AI) in your Python programs. It's like a bridge between your code and the powerful AI models created by OpenAI.


More OpenAI API Tools

Full documentation for all the available APIs is at api.md

What Can You Do with It?

With the OpenAI Python client, you can create all sorts of cool things! Here are some examples:

Text Generation

You can use the client to generate human-like text, such as stories, articles, or even code! Imagine writing a short story and having the AI help you finish it.

Chatbots

Create chatbots that can have conversations with people. These chatbots can be used for customer service, entertainment, or even as virtual friends!

Image Generation

The client can generate images based on text descriptions. For example, ask it to create an image of a cute puppy playing with a ball, and it will generate one for you!

Audio Processing

Transcribe audio recordings into text. This is useful for creating subtitles for videos or transcripts of interviews.

Moderation

The client can help moderate content, such as comments on a website or messages in a chat app. It can detect things like profanity, hate speech, or explicit content.


Important

The SDK was rewritten in v1, which was released November 6th 2023. See the [v1 migration guide](openai#742), which includes scripts to automatically update your code.


What is an SDK?


An SDK (Software Development Kit) is like a toolkit for developers. It has all the tools and pieces they need to build software that can talk to other software, like OpenAI's AI models. Think of it like a set of LEGO pieces and instructions to build something cool.

Imagine you have a favorite app on your phone that just got a big update. This message is kind of like a heads-up about a major update to a piece of software that developers use to talk to OpenAI's AI models. Let's break it down step by step:

What Happened?:

  1. Rewritten SDK in v1: The toolkit (SDK) got a major overhaul or rewrite. This means they changed a lot of things in how it works. This big change was released on November 6th, 2023, and they call this new version "v1" (version 1).
  2. Why is this Important? If developers were using the old version of the toolkit, they need to know that things have changed. The way they wrote their code to talk to the AI might not work the same way anymore. They need to update their code to match the new toolkit.

What Should You Do?:

  • Check the v1 Migration Guide: There is a special guide called the "v1 migration guide." It’s like a how-to manual that helps you move your old stuff (code) to work with the new version of the toolkit.
  • Location of the Guide: You can find this guide at this link: v1 migration guide. This link takes you to a place where they explain all the changes and even give you scripts (small programs) to help you automatically update your code.

Why Scripts?:

Scripts are like little helpers that can automatically do tasks for you. Instead of you manually changing every piece of your code to work with the new toolkit, these scripts do it for you quickly and correctly.

Summary:

So, to sum it up:

  • The toolkit (SDK) got a big update on November 6th, 2023.
  • If you're using the old version, you need to update your code.
  • There's a guide with instructions and helpful scripts to make this update easier.
  • You can find all this information in the migration guide at the given link.


Installation:



Requirements:

OpenAI and Python can be installed on 🖥️

Python · Linux · Windows · Apple · Android

The basic requirement for using OpenAI is Python 3.7 or higher (for example, Python 3.11) and a computer with an internet connection. It's like needing a specific version of an app to use certain features.

Check List:

  1. Python 3.7 or higher
  2. A computer with an internet connection
  3. An OpenAI account
  4. An OpenAI API key
  5. A text editor or IDE (Integrated Development Environment)
  6. A terminal or command prompt
  7. A basic understanding of Python programming language and its syntax and concepts like variables, functions, loops, and conditional statements. You can learn more about Python programming language by visiting the Python Documentation.


Installation Guide for Windows, macOS and Linux

How to obtain your OpenAI API Key:

To obtain an OpenAI API key, follow these steps. Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps). Production requests must be routed through your own backend server, where your API key can be securely loaded from an environment variable (.env file) or a key management service.

  1. Sign Up: Go to the OpenAI website and sign up for an account if you don't already have one.
  2. Login: Once logged in, navigate to the API section.
  3. Generate Key: Click on "API keys" and then "Create API Key." A new key will be generated and displayed.
  4. Save the Key: Copy and securely store the API key. You will use this key to authenticate your requests to the OpenAI API.
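Once the key is saved, a quick way to confirm your script can actually see it is to load it and check that a value is present. This is a minimal sketch, assuming you keep the key in a .env file next to your script as OPENAI_API_KEY (as described in the installation steps below):

# Minimal sanity check (assumes a .env file containing OPENAI_API_KEY=... in the same folder)
import os
from dotenv import load_dotenv

load_dotenv()  # read the .env file into environment variables
key = os.getenv("OPENAI_API_KEY")
print("API key loaded" if key else "API key missing - check your .env file")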

To install Python on your computer, follow these steps:

Windows: 🪟

  1. Download Installer: Go to the official Python website and download the latest installer, for example Python 3.11.

  2. Run Installer: Run the downloaded file. NOTICE !!! Check the box for "Add Python to PATH" and click "Install Now." "Add Python to PATH" is a setting that tells your computer to remember where Python is located, so you can use it from anywhere.

  3. Verify Installation: Open Command Prompt and type python --version to check the installed version. If you have any issues, just type exit in the terminal and reopen it. This should help; then run python --version again.

  4. python --version

macOS:

  1. Use Homebrew: Open Terminal and install Homebrew if you haven't:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

  2. Install Python: Run:

    brew update
    brew upgrade
    brew install python

  • Verify Installation: Open Terminal and type python3 --version to check the installed version.
  • If you have any issues, just type exit in the Terminal and reopen it. This should help; then run python3 --version again.

Linux:

  1. Use Package Manager: For Ubuntu, open Terminal and run:

    sudo apt update
    sudo apt install python3

Verify Installation:

  • Open a terminal and type python3 --version to check the installed version.
  • If you have any issues, just type exit in the terminal and reopen it. This should help; then run python3 --version again.

If your version is lower than 3.7, you'll need to update Python to use this library. You can download the latest version of Python from the official Python website.

Pip Install Modules:

🚀 Explanation:

To use the OpenAI library, you need to use the pip command. This is like installing a new app on your phone, but for Python. Imagine you are adding a new tool to your toolbox so you can use it in your programming projects. The command pip install is like telling your computer to go to the Python app store (PyPI) and download the OpenAI tool for you.

To use the .env file we need a module called python-dotenv, which must be installed. This package allows you to load environment variables from a .env file into your environment. The .env file is where you add your API key, which is useful for keeping sensitive information like API keys out of your codebase.

pip install openai
pip install python-dotenv

💡 Explanation:

Here's how you use the library to talk to the AI models. Think of this like having a conversation with your smart robot assistant. You set up the connection, ask it to say something, and then it responds. Let's break it down:

  1. First, you import the necessary tools (os and OpenAI).
  2. Then, you create a "client" - think of this as establishing a phone line to the AI.
  3. You send a message to the AI, just like texting a friend.
  4. The AI processes your message and sends back a response.


Section 2: Understanding the OpenAI Python Library


How to use the OpenAI Python Library code in a script.

This code sets up the AI client and asks it to say "This is a test." The response should then be displayed in the terminal. It's like teaching a parrot to repeat a phrase!

Steps:

  1. Create a new file and save it as test_openai.py
  2. Open the .env file and place your key after the = sign.
  3. OPENAI_API_KEY=sk-putyourkeyhere > replace sk-putyourkeyhere with your actual key
  4. Save the file.
  5. You can now use the API key in your script by adding from dotenv import load_dotenv
  6. To load environment variables (your API key) from the .env file into your script, add load_dotenv() to the script. See below for an example.
  7. Copy the code below and save it as test_openai.py
  8. Then inside the terminal type python test_openai.py

The full API of this library can be found in api.md.

While you can provide an api_key keyword argument directly, we recommend using python-dotenv so you can load your OpenAI API key into your script from a .env file.

#!/usr/bin/env python3  # Shebang line to specify the interpreter

# Import the required libraries
import os  # import os module to access environment variables
from openai import OpenAI  # import OpenAI module
from dotenv import load_dotenv  # import dotenv module

load_dotenv()  # Load environment variables (loads your API Key) from .env file

client = OpenAI(  # initialize OpenAI client
    api_key=os.getenv("OPENAI_API_KEY"),  # loads your API Key from .env file
)

chat_completion = client.chat.completions.create(  # create a chat completion
    messages=[  # messages is a list of messages
        {
            "role": "user",  # role is the person speaking
            "content": "Say this is a test",  # content is the message
        }
    ],
    model="gpt-4o",  # model is the AI model you want to use
)

print(chat_completion.choices[0].message.content)  # print the response from the AI model into the terminal
> python test_openai.py
This is a test.

This is what you should see afterwards. It means you are on the right path: your script is connected to ChatGPT and is using the GPT-4o model via the openai and dotenv modules. Congratulations!


Polling Helpers

When interacting with the API some actions such as starting a Run and adding files to vector stores are asynchronous and take time to complete. The SDK includes helper functions which will poll the status until it reaches a terminal state and then return the resulting object. If an API method results in an action that could benefit from polling there will be a corresponding version of the method ending in '_and_poll'.

⏳ Explanation:

Some actions take time to complete, like starting a process or uploading files. Polling helpers keep checking until these actions are done. Imagine you are baking a cake and you keep checking the oven until the cake is ready. In this case, you're starting a task (like asking the AI to do some work) and then waiting until it's finished before moving on. The create_and_poll function does this waiting for you automatically, so you don't have to keep checking manually.

Running a Thread

To create and poll a run within a thread using the OpenAI API, follow these steps:

Overview

The following example demonstrates how to initiate a run within a specific thread and automatically poll for its status. This can be useful for tracking the progress of a long-running task, such as a conversation or a job.

Example Code

# Assuming you have already set up the OpenAI client and have a thread and assistant
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)

Parameters

  • thread_id: The unique identifier for the thread in which you want to run the task. This is essential to specify the context of the run.
  • assistant_id: The unique identifier for the assistant you want to use. This could be an AI model or a specific assistant configuration.
  • The script below will use this code and create the identifiers once it has run. Copy these to the .env file afterwards. More info below.
  • More information on the lifecycle of a Run can be found in the Run Lifecycle Documentation

Usage

  1. Setup: Make sure you have the necessary API credentials and have initialized the OpenAI client properly.
  2. Run Creation: Use the create_and_poll method to start a new run within the specified thread.
  3. Polling: The function automatically polls the run's status, providing updates until the task is complete.

Example of Code using Polling Helpers and Running a Thread

  1. Let's use the file and script we created earlier, test_openai.py

  2. Copy and Paste the code below into the file, make sure the file is in the same folder as the .env file.

  3. It is best to make a new folder for this page and create separate files for each script. This makes it easy to see the differences and learn.

  4. Make sure you have your .env file with OPENAI_API_KEY=yourkey

  5. When you run the script, it will create a new thread ID and assistant ID. To keep the conversation going over time, we will keep the same thread and assistant IDs. Once the script has generated the IDs, copy the thread and assistant IDs and place them into the .env file below the OPENAI_API_KEY.

  6. Run the python script.

  7. python test_openai.py 


test_openai.py
#!/usr/bin/env python3
import os
from openai import OpenAI
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize OpenAI client
client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
)

# Function to get or create IDs
def get_or_create_ids():
    thread_id = os.getenv("THREAD_ID")
    assistant_id = os.getenv("ASSISTANT_ID")

    if not thread_id:
        thread = client.beta.threads.create()
        thread_id = thread.id
        print(f"New thread created. Thread ID: {thread_id}")
        update_env_file("THREAD_ID", thread_id)
    else:
        print(f"Using existing Thread ID: {thread_id}")

    if not assistant_id:
        assistant = client.beta.assistants.create(
            name="My Assistant",
            instructions="You are a helpful assistant.",
            model="gpt-4",
        )
        assistant_id = assistant.id
        print(f"New assistant created. Assistant ID: {assistant_id}")
        update_env_file("ASSISTANT_ID", assistant_id)
    else:
        print(f"Using existing Assistant ID: {assistant_id}")

    return thread_id, assistant_id

# Function to update .env file
def update_env_file(key, value):
    env_path = '.env'
    with open(env_path, 'a') as f:
        f.write(f"\n{key}={value}")

# Get or create thread and assistant IDs
thread_id, assistant_id = get_or_create_ids()

# Create a message in the thread
message = client.beta.threads.messages.create(
    thread_id=thread_id,
    role="user",
    content="Say this is a test",
)

# Create and poll a run
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread_id,
    assistant_id=assistant_id,
)

# Retrieve and print the assistant's response
messages = client.beta.threads.messages.list(thread_id=thread_id)
assistant_message = next((msg for msg in messages if msg.role == "assistant"), None)
if assistant_message:
    print("Assistant's response:")
    print(assistant_message.content[0].text.value)
else:
    print("No response from the assistant.")


Output

If this is your first time running the script, you should see output like the following in the terminal. Congratulations, you have just created your first thread and assistant ID.

THREAD_ID=thread_jQZNE3hs968JWWZAPiB2Tk2C
ASSISTANT_ID=asst_vnInhkMyxNkcON1UZpJylQN8

NB

Once we run this script it generates the IDs and appends them to the .env file automatically, below the OPENAI_API_KEY. It is best to take a look at the .env file afterwards and make sure it is correct.

Once run, this script will reuse the same thread and assistant IDs, so the conversation can continue over time. We will keep the same thread and assistant IDs for this example.

Bulk Upload Helpers


You can upload your documents and the script will be able to answer questions on the documents you uploaded.

When creating and interacting with vector stores, you can use polling helpers to monitor the status of operations.

For convenience, we also provide a bulk upload helper to allow you to simultaneously upload several files at once.

For more information about what kind of files can be uploaded and more code, please go to OpenAI File Search


📤 Explanation:

You can upload multiple files at once and check their status. This is like sending a bunch of letters at the post office and waiting to see when they are all delivered.

In programming terms, you're sending multiple files to the AI system at the same time, which can save a lot of time compared to uploading them one by one.

The upload_and_poll function takes care of sending all the files and waiting until they're all properly received and processed.

sample_files = [Path("sample-paper.pdf"), ...]

batch = await client.vector_stores.file_batches.upload_and_poll(
    store.id,
    files=sample_files,
)
  1. Copy the code below and paste it into your script.
  2. The next step is to run the script, and have a file ready to upload to the AI database.
 python test_openai.py 


Upload File with chat.py
# Import the required libraries
import os
import sys
import json
from openai import OpenAI
from datetime import datetime
from dotenv import load_dotenv
from termcolor import colored
from pathlib import Path

# Load environment variables (loads your API Key) from .env file
load_dotenv()

# Initialize OpenAI client
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Define the model engine
model_engine = "gpt-4o"  # Ensure this is the correct model ID

# Define the assistant's role
assistant_role = "You are a useful helper, professor, the best programmer in the world, and computer technician in the style and tone of Christopher Walken. you are a genius programmer and expert at all technology and languages, you are best when you love to help people, provide suggestions for improvements and also to double-check your code to check for errors, as we all make them, and give detailed step-by-step instructions as if I am 14 years old and only learning, but I have some basics and understanding of python code, but I love to learn so explain everything to me."

# Define user and bot names
user_name = "Ranger"
bot_name = "Jervis"

# Define the thread and assistant IDs (these would typically be obtained from previous API calls or setup)
thread_id = os.getenv("THREAD_ID", None)
assistant_id = os.getenv("ASSISTANT_ID", None)
vector_store_id = os.getenv("VECTOR_STORE_ID", None)

def create_thread_and_assistant():
    global thread_id, assistant_id, vector_store_id
    thread = client.beta.threads.create()
    thread_id = thread.id
    assistant = client.beta.assistants.create(
        name="Programming genius Assistant. Use your knowledge base to answer questions about python json and all other programming languages.",
        instructions=assistant_role,
        model=model_engine,
        tools=[{"type": "file_search"}],
    )
    assistant_id = assistant.id
    print(f"Created new thread ID: {thread_id}")
    print(f"Created new assistant ID: {assistant_id}")

# Create thread and assistant if they don't exist
if not thread_id or not assistant_id:
    create_thread_and_assistant()

# Function to update .env file
def update_env_file(key, value):
    env_path = '.env'
    with open(env_path, 'a') as f:
        f.write(f"\n{key}={value}")

def upload_files_to_vector_store(file_paths):
    global vector_store_id
    if not vector_store_id:
        vector_store = client.beta.vector_stores.create(name="Financial Statements")
        vector_store_id = vector_store.id
        update_env_file("VECTOR_STORE_ID", vector_store_id)  # Save vector_store_id to .env file
    file_streams = [open(path, "rb") for path in file_paths]
    file_batch = client.beta.vector_stores.file_batches.upload_and_poll(
        vector_store_id=vector_store_id, files=file_streams
    )
    print(file_batch.status)
    print(file_batch.file_counts)

def update_assistant_with_vector_store():
    client.beta.assistants.update(
        assistant_id=assistant_id,
        tool_resources={"file_search": {"vector_store_ids": [vector_store_id]}},
    )

def chat_gpt4(query, files=None):
    if files:
        upload_files_to_vector_store(files)
        update_assistant_with_vector_store()

    # Add the user's message to the thread
    client.beta.threads.messages.create(
        thread_id=thread_id, role="user", content=query
    )

    # Create and poll a new run within the specified thread
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread_id,
        assistant_id=assistant_id,
    )

    # Optionally, handle the result
    print(f"Run ID: {run.id}")
    print(f"Status: {run.status}")

    # Retrieve the messages added by the assistant to the thread
    messages = client.beta.threads.messages.list(
        thread_id=thread_id
    )

    # Print the response from the model
    if messages.data:
        print(f"{bot_name}: {messages.data[0].content[0].text.value}")
    else:
        print("No messages found.")

    # Create a conversation log
    conversation_log = []

    try:
        # Generate a chat completion
        response = client.chat.completions.create(
            model=model_engine,
            messages=[
                {"role": "system", "content": assistant_role},
                {"role": "user", "content": query}
            ],
            temperature=0.9,
            max_tokens=2048,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0.1,
            stop=[" Human:", " AI:"]
        )
        response_content = response.choices[0].message.content.strip()

        # Print the response from the model
        print(f"{bot_name}: {response_content}")

        # Add the user's query and the assistant's response to the conversation log
        conversation_log.append({"role": "user", "content": query})
        conversation_log.append({"role": "assistant", "content": response_content})
    except Exception as e:
        print(colored(f"Error: {e}", "red"))
        return

    # Save the conversation to a text file
    with open('rgpt4.txt', 'a', encoding='utf-8') as file:
        file.write("=== GPT-4 Chat started at {} ===\n".format(datetime.now().strftime("%d/%m/%Y %H:%M:%S")))
        for entry in conversation_log:
            file.write(f"[{entry['role'].capitalize()}]: {entry['content']}\n")
        file.write("=== GPT-4 Chat ended at {} ===\n\n".format(datetime.now().strftime("%d/%m/%Y %H:%M:%S")))

    # Save the conversation to a JSON file
    with open('rgpt4.json', 'a', encoding='utf-8') as json_file:
        json.dump(conversation_log, json_file, ensure_ascii=False, indent=4)
        json_file.write('\n')

def main():
    if len(sys.argv) > 1:
        # If a question is provided as a command-line argument
        query = ' '.join(sys.argv[1:])
        chat_gpt4(query)
    else:
        # Start the conversation
        print(f"{bot_name}: How can I help?")
        while True:
            query = input(f"{user_name}: ")
            if query.lower() in ["exit", "quit"]:
                break
            # Check if the user wants to upload files
            if query.lower() == "upload":
                file_paths = input("Enter the file paths (comma-separated): ").split(',')
                files = [path.strip() for path in file_paths]
                chat_gpt4(query, files=files)
            else:
                chat_gpt4(query)
            follow_up = input(f"{bot_name}: Do you have another question? (yes/no): ")
            if follow_up.lower() not in ["yes", "y"]:
                break

if __name__ == "__main__":
    main()
  1. After running the file you should get something like this below. (p is my alias for the word python.)
  2. When Jervis asks "How can I help?" you can either ask a question or type upload and it will ask you for the file's location. I usually right-click on a file, click "Copy Path", and use that (I am using a MacBook Pro M3).


To ask questions about a file you've uploaded, you need to ensure that the file is part of a vector store that the assistant can query. The vector store is a conceptual database where the content of your files is indexed and can be searched. Here's how you can modify the script to handle file uploads, create a vector store, and allow you to ask questions about the uploaded files:

Upload Files to Vector Store:

  • When you type "upload", the script will prompt you for file paths and upload these files to a vector store.

Ask Questions About Uploaded Files:

  • After uploading the files, you can ask questions about the content of these files, and the assistant will search the vector store for relevant information to answer your questions.

Vector Store Location:

  • The vector store is managed by the OpenAI API. You don't need to worry about its physical location; you just need to ensure that the files are uploaded and indexed correctly.


Streaming Helpers


The SDK also includes helpers to process streams and handle incoming events.

OpenAI supports streaming responses when interacting with the Assistant APIs.

🔄 Explanation:

You can stream responses from the AI, which means you get parts of the response as they come in, instead of waiting for the whole thing. It's like watching a YouTube video as it loads rather than waiting for the entire video to download first. In this code:

  1. You start a "stream" of information from the AI.
  2. You give some instructions to the AI (like how to address the user).
  3. As the AI generates its response, you get pieces of it one at a time.
  4. You can process or display these pieces as they arrive, making the interaction feel more real-time and responsive.

This is particularly useful for long responses or when you want to show progress to the user while the AI is thinking.

with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id=assistant.id,
    instructions="Please address the user as Jane Doe. The user has a premium account.",
) as stream:
    for event in stream:
        # Print the text from text delta events
        if event.type == "thread.message.delta" and event.data.delta.content:
            print(event.data.delta.content[0].text)

More information on streaming helpers can be found in the dedicated documentation: helpers.md

Assistant Streaming API

OpenAI supports streaming responses from Assistants. The SDK provides convenience wrappers around the API so you can subscribe to the types of events you are interested in as well as receive accumulated responses.

More information can be found in the documentation: Assistant Streaming

An example of creating a run and subscribing to some events

You can subscribe to events by creating an event handler class and overloading the relevant event handlers.

from typing_extensions import override
from openai import AssistantEventHandler, OpenAI
from openai.types.beta.threads import Text, TextDelta
from openai.types.beta.threads.runs import ToolCall, ToolCallDelta

client = OpenAI()

# First, we create an EventHandler class to define
# how we want to handle the events in the response stream.
class EventHandler(AssistantEventHandler):
    @override
    def on_text_created(self, text: Text) -> None:
        print(f"\nassistant > ", end="", flush=True)

    @override
    def on_text_delta(self, delta: TextDelta, snapshot: Text):
        print(delta.value, end="", flush=True)

    @override
    def on_tool_call_created(self, tool_call: ToolCall):
        print(f"\nassistant > {tool_call.type}\n", flush=True)

    @override
    def on_tool_call_delta(self, delta: ToolCallDelta, snapshot: ToolCall):
        if delta.type == "code_interpreter" and delta.code_interpreter:
            if delta.code_interpreter.input:
                print(delta.code_interpreter.input, end="", flush=True)
            if delta.code_interpreter.outputs:
                print(f"\n\noutput >", flush=True)
                for output in delta.code_interpreter.outputs:
                    if output.type == "logs":
                        print(f"\n{output.logs}", flush=True)

# Then, we use the `stream` SDK helper
# with the `EventHandler` class to create the Run
# and stream the response.
with client.beta.threads.runs.stream(
    thread_id="thread_id",
    assistant_id="assistant_id",
    event_handler=EventHandler(),
) as stream:
    stream.until_done()

Full working example of Streaming Helpers

  1. Create a file called chat.py
  2. Copy and paste the code below into the file; make sure the file is in the same folder as the .env file.
  3. This script imports the keys from the .env file so you can continue your conversation, using the vector store that was created earlier.
  4. Run the script:
  5. python chat.py 
# Import the required libraries
import os
import sys
import json
from openai import OpenAI, AssistantEventHandler  # AssistantEventHandler is needed for the EventHandler class below
from datetime import datetime
from dotenv import load_dotenv
from termcolor import colored
from typing_extensions import override

# Load environment variables (loads your API Key) from .env file
load_dotenv()

# Initialize OpenAI client
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Define the model engine
model_engine = "gpt-4o"  # Ensure this is the correct model ID

# Define the assistant's role
assistant_role = "You are a useful helper, professor, the best programmer in the world, and computer technician in the style and tone of Christopher Walken. you are a genius programmer and expert at all technology and languages, you are best when you love to help people, provide suggestions for improvements and also to double-check your code to check for errors, as we all make them, and give detailed step-by-step instructions as if I am 14 years old and only learning, but I have some basics and understanding of python code, but I love to learn so explain everything to me."

# Define user and bot names
user_name = "Ranger"
bot_name = "Jervis"

# Define the thread and assistant IDs (these would typically be obtained from previous API calls or setup)
thread_id = os.getenv("THREAD_ID", None)
assistant_id = os.getenv("ASSISTANT_ID", None)
vector_store_id = os.getenv("VECTOR_STORE_ID", None)

def create_thread_and_assistant():
    global thread_id, assistant_id, vector_store_id
    thread = client.beta.threads.create()
    thread_id = thread.id
    assistant = client.beta.assistants.create(
        name="Programming genius Assistant. Use your knowledge base to answer questions about python json and all other programming languages.",
        instructions=assistant_role,
        model=model_engine,
        tools=[{"type": "file_search"}],
    )
    assistant_id = assistant.id
    print(f"Created new thread ID: {thread_id}")
    print(f"Created new assistant ID: {assistant_id}")

# Create thread and assistant if they don't exist
if not thread_id or not assistant_id:
    create_thread_and_assistant()

def upload_files_to_vector_store(file_paths):
    global vector_store_id
    if not vector_store_id:
        vector_store = client.beta.vector_stores.create(name="Financial Statements")
        vector_store_id = vector_store.id
    file_streams = [open(path, "rb") for path in file_paths]
    file_batch = client.beta.vector_stores.file_batches.upload_and_poll(
        vector_store_id=vector_store_id, files=file_streams
    )
    print(file_batch.status)
    print(file_batch.file_counts)

def update_assistant_with_vector_store():
    client.beta.assistants.update(
        assistant_id=assistant_id,
        tool_resources={"file_search": {"vector_store_ids": [vector_store_id]}},
    )

class EventHandler(AssistantEventHandler):
    @override
    def on_text_created(self, text) -> None:
        print(f"\n{bot_name} > ", end="", flush=True)

    @override
    def on_text_delta(self, delta, snapshot):
        print(delta.value, end="", flush=True)

    @override
    def on_tool_call_created(self, tool_call):
        print(f"\n{bot_name} > {tool_call.type}\n", flush=True)

    @override
    def on_tool_call_delta(self, delta, snapshot):
        if delta.type == "code_interpreter" and delta.code_interpreter:
            if delta.code_interpreter.input:
                print(delta.code_interpreter.input, end="", flush=True)
            if delta.code_interpreter.outputs:
                print(f"\n\noutput >", flush=True)
                for output in delta.code_interpreter.outputs:
                    if output.type == "logs":
                        print(f"\n{output.logs}", flush=True)

def chat_gpt4(query, files=None):
    if files:
        upload_files_to_vector_store(files)
        update_assistant_with_vector_store()

    # Add the user's message to the thread
    client.beta.threads.messages.create(
        thread_id=thread_id, role="user", content=query
    )

    # Create and stream a new run within the specified thread
    with client.beta.threads.runs.stream(
        thread_id=thread_id,
        assistant_id=assistant_id,
        event_handler=EventHandler(),
    ) as stream:
        stream.until_done()

    # Create a conversation log
    conversation_log = []

    try:
        # Generate a chat completion
        response = client.chat.completions.create(
            model=model_engine,
            messages=[
                {"role": "system", "content": assistant_role},
                {"role": "user", "content": query}
            ],
            temperature=0.9,
            max_tokens=2048,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0.1,
            stop=[" Human:", " AI:"]
        )
        response_content = response.choices[0].message.content.strip()

        # Add the user's query and the assistant's response to the conversation log
        conversation_log.append({"role": "user", "content": query})
        conversation_log.append({"role": "assistant", "content": response_content})
    except Exception as e:
        print(colored(f"Error: {e}", "red"))
        return

    # Save the conversation to a text file
    with open('rgpt4.txt', 'a', encoding='utf-8') as file:
        file.write("=== GPT-4 Chat started at {} ===\n".format(datetime.now().strftime("%d/%m/%Y %H:%M:%S")))
        for entry in conversation_log:
            file.write(f"[{entry['role'].capitalize()}]: {entry['content']}\n")
        file.write("=== GPT-4 Chat ended at {} ===\n\n".format(datetime.now().strftime("%d/%m/%Y %H:%M:%S")))

    # Save the conversation to a JSON file
    with open('rgpt4.json', 'a', encoding='utf-8') as json_file:
        json.dump(conversation_log, json_file, ensure_ascii=False, indent=4)
        json_file.write('\n')

def main():
    if len(sys.argv) > 1:
        # If a question is provided as a command-line argument
        query = ' '.join(sys.argv[1:])
        chat_gpt4(query)
    else:
        # Start the conversation
        print(f"{bot_name}: How can I help?")
        while True:
            query = input(f"{user_name}: ")
            if query.lower() in ["exit", "quit"]:
                break
            # Check if the user wants to upload files
            if query.lower() == "upload":
                file_paths = input("Enter the file paths (comma-separated): ").split(',')
                files = [path.strip() for path in file_paths]
                chat_gpt4(query, files=files)
            else:
                chat_gpt4(query)
            follow_up = input(f"{bot_name}: Do you have another question? (yes/no): ")
            if follow_up.lower() not in ["yes", "y"]:
                break

if __name__ == "__main__":
    main()

Async usage

Simply import AsyncOpenAI instead of OpenAI and use await with each API call:

import os
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)

async def main() -> None:
    chat_completion = await client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": "Say this is a test",
            }
        ],
        model="gpt-3.5-turbo",
    )

asyncio.run(main())

Functionality between the synchronous and asynchronous clients is otherwise identical.

🔄 Explanation:

You can use the library with asynchronous code, which lets your program do other things while waiting for the AI to respond. It's like cooking several dishes at once instead of one after the other. Here's what's happening:

  1. You import a special version of the OpenAI client that works asynchronously.
  2. You define a function (main()) that uses await to talk to the AI.
  3. This allows your program to do other tasks while it's waiting for the AI's response.
  4. Finally, you run this async function using asyncio.run(main()).

This is particularly useful in applications that need to handle multiple tasks simultaneously, like web servers or interactive applications.

Streaming responses

We provide support for streaming responses using Server-Sent Events (SSE).

from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
🔄 Explanation:

Streaming responses allow you to get and process the AI's reply piece by piece, as it's being generated. It's like reading a book as it's being written, page by page, instead of waiting for the entire book to be finished. This code:

  1. Sets up a streaming connection to the AI.
  2. Asks the AI to say "this is a test".
  3. As the AI generates its response, it sends back small "chunks" of text.
  4. The code prints out each chunk as it arrives, creating a smooth, flowing output.

This is great for creating more responsive and interactive applications, especially when dealing with longer AI responses.

The async client uses the exact same interface.

import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def main():
    stream = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")

asyncio.run(main())

Module-level client

Important

We highly recommend instantiating client instances instead of relying on the global client.

We also expose a global client instance that is accessible in a similar fashion to versions prior to v1.

import openai

# optional; defaults to `os.environ['OPENAI_API_KEY']`
openai.api_key = '...'

# all client options can be configured just like the `OpenAI` instantiation counterpart
openai.base_url = "https://..."
openai.default_headers = {"x-foo": "true"}

completion = openai.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.choices[0].message.content)

The API is the exact same as the standard client instance-based API.

This is intended to be used within REPLs or notebooks for faster iteration, not in application code.

We recommend that you always instantiate a client (e.g., with client = OpenAI()) in application code because:

  • It can be difficult to reason about where client options are configured
  • It's not possible to change certain client options without potentially causing race conditions
  • It's harder to mock for testing purposes
  • It's not possible to control cleanup of network connections
🔧 Explanation:

This section talks about a global client, which is like having a universal remote that works for all your devices. However, just like a universal remote might not have all the special features for each specific device, using a global client isn't always the best choice for complex programs. Here's what's happening:

  1. You set up a global OpenAI client that can be used anywhere in your code.
  2. You can configure various options for this client, like the API key and default settings.
  3. You can then use this client to interact with the AI, like asking it how to list files in a directory.

While this method is simple and can be useful for quick experiments or small scripts, for larger projects, it's better to create specific client instances for different parts of your program. This gives you more control and makes your code easier to manage and test.

Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set python.analysis.typeCheckingMode to basic.

Using Types

Nested request parameters are TypedDicts. Responses are Pydantic models which also provide helper methods for things like:

  • Serializing back into JSON, model.to_json()
  • Converting to a dictionary, model.to_dict()
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Can you generate an example json object describing a fruit?",
        }
    ],
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
)
🛠️ Explanation:

The library uses typed requests and responses, which means it can help you catch mistakes while you write your code. Think of it as having a spell-checker for your programming instructions. Here's what this means:

  1. When you send requests to the AI, you use special Python dictionaries (TypedDicts) that help ensure you're providing the right kind of information.
  2. When you get responses back, they come as Pydantic models, which are like smart containers for data.
  3. These models have helpful methods, like turning the data back into JSON or into a regular Python dictionary.

In the example, we're asking the AI to create a JSON object describing a fruit. The library ensures we're formatting our request correctly and helps us work with the response easily. This type system acts like a safety net, catching potential errors before they cause problems in your program.
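As a small follow-on sketch (reusing the completion object from the example above), the helper methods mentioned earlier can be used like this:

# Responses are Pydantic models, so these helpers work on the `completion` object above.
print(completion.to_json())            # serialize the full response back into JSON
as_dict = completion.to_dict()         # convert the response into a plain Python dictionary
print(as_dict["choices"][0]["message"]["content"])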

Pagination

List methods in the OpenAI API are paginated.

This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:

from openai import OpenAI

client = OpenAI()

all_jobs = []
# Automatically fetches more pages as needed.
for job in client.fine_tuning.jobs.list(
    limit=20,
):
    # Do something with job here
    all_jobs.append(job)
print(all_jobs)
🚀 Explanation:

Imagine you're reading a really long book, but instead of giving you the whole book at once, the library gives you 20 pages at a time. This code is like a magical bookmark that automatically gets the next 20 pages for you when you finish reading the current ones. You don't have to worry about asking for the next part - it just happens! In this case, instead of pages, we're getting information about AI training jobs, 20 at a time.

Or, asynchronously:

import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def main() -> None:
    all_jobs = []
    # Iterate through items across all pages, issuing requests as needed.
    async for job in client.fine_tuning.jobs.list(
        limit=20,
    ):
        all_jobs.append(job)
    print(all_jobs)

asyncio.run(main())
πŸƒβ€β™‚οΈ Explanation:

This is like the previous example, but it's even cooler! Imagine you're in a relay race where you can start the next runner before the current one finishes. This code does something similar - it starts getting the next batch of information while it's still processing the current one. It's a way to make things happen faster, especially when you're dealing with lots of data.

Alternatively, you can use the .has_next_page(), .next_page_info(), or .get_next_page() methods for more granular control working with pages:

first_page = await client.fine_tuning.jobs.list(
    limit=20,
)
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.data)}")

# Remove `await` for non-async usage.

Or just work directly with the returned data:

first_page = await client.fine_tuning.jobs.list(
    limit=20,
)
print(f"next page cursor: {first_page.after}")  # => "next page cursor: ..."
for job in first_page.data:
    print(job.id)

# Remove `await` for non-async usage.
📂 Explanation:

This is like getting a page of a book and a bookmark that shows where the next page starts. You can look at all the information on the current page (printing each job's ID), and you also know where to start reading next (the "next page cursor"). It's a way to keep track of where you are in all the information, just like how you might use a bookmark to remember your place in a big book.

Some API responses are too large to send all at once, so they are split into pages. The library can automatically handle fetching these pages for you. It's like getting a long book in several smaller, manageable volumes instead of one big, heavy book. Here's how it works:

  1. You start a request to list something, like jobs for fine-tuning AI models.
  2. You set a limit (in this case, 20) for how many items you want per page.
  3. The library automatically fetches new pages as you go through the list.
  4. You can process each item (job) as it comes in, without worrying about the pagination.

This makes it much easier to work with large amounts of data, as you don't have to manually keep track of which page you're on or when to request the next page. The library handles all of that for you behind the scenes.

Nested params

Nested parameters are dictionaries, typed using TypedDict, for example:

from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Can you generate an example json object describing a fruit?",
        }
    ],
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
)
🔄 Explanation:

Nested parameters allow you to organize complex information in a structured way, like having folders inside folders on your computer. Here's what's happening in this code:

  1. We create an OpenAI client to communicate with the AI.
  2. We use the chat.completions.create method to generate a response.
  3. The messages parameter is a list containing a dictionary. This dictionary has two nested key-value pairs: "role" and "content".
  4. We specify the AI model to use with the model parameter.
  5. The response_format parameter is another nested dictionary, telling the AI to respond with a JSON object.

This nested structure allows us to provide detailed and organized instructions to the AI. In this case, we're asking it to generate a JSON object describing a fruit. The use of TypedDict helps ensure that we're formatting these nested parameters correctly, reducing the chance of errors in our code.

File uploads

Request parameters that correspond to file uploads can be passed as bytes, a PathLike instance or a tuple of (filename, contents, media type).

from pathlib import Path
from openai import OpenAI

client = OpenAI()

client.files.create(
    file=Path("input.jsonl"),
    purpose="fine-tune",
)
🔧 Explanation:

The async client uses the exact same interface. If you pass a PathLike instance, the file contents will be read asynchronously automatically.

You can upload files directly to the API, which can be used for things like fine-tuning models. It's like uploading a document to a website so that the site can use the information in the document. In this example:

  1. We import the Path class to work with file paths easily.
  2. We create an OpenAI client.
  3. We use the files.create method to upload a file.
  4. We specify the file path and its purpose (in this case, for fine-tuning a model).

This is useful when you need to provide large amounts of data to the AI, such as training data for customizing models.
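The tuple form mentioned above, (filename, contents, media type), is handy when the data is already in memory rather than on disk. A rough sketch, where the filename and media type shown are just illustrative values:

# Upload from in-memory bytes instead of a path (filename and media type are illustrative).
with open("input.jsonl", "rb") as f:
    client.files.create(
        file=("input.jsonl", f.read(), "application/jsonl"),
        purpose="fine-tune",
    )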

Handling errors

When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of openai.APIConnectionError is raised.

When the API returns a non-success status code (that is, 4xx or 5xx response), a subclass of openai.APIStatusError is raised, containing status_code and response properties.

All errors inherit from openai.APIError.

import openai
from openai import OpenAI

client = OpenAI()

try:
    client.fine_tuning.jobs.create(
        model="gpt-3.5-turbo",
        training_file="file-abc123",
    )
except openai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except openai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except openai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)

Error codes are as follows:

Status Code Error Type
400 BadRequestError
401 AuthenticationError
403 PermissionDeniedError
404 NotFoundError
422 UnprocessableEntityError
429 RateLimitError
>=500 InternalServerError
N/A APIConnectionError
⚠️ Explanation:

The library provides error handling for different types of errors that can occur while interacting with the API. It's like having a plan for what to do if something goes wrong while you're working on a project. Here's what's happening:

  1. We set up a try-except block to catch different types of errors.
  2. We attempt to create a fine-tuning job.
  3. If there's a connection error, we catch it and print a message.
  4. If we hit a rate limit (too many requests), we catch that specific error.
  5. For any other API errors, we catch them and print details about the error.

This error handling helps you write more robust code that can gracefully handle problems when they occur, rather than crashing unexpectedly.

Retries

Certain errors are automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default.

You can use the max_retries option to configure or disable retry settings:

from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I get the name of the current day in Node.js?",
        }
    ],
    model="gpt-3.5-turbo",
)

🔁 Explanation:

Some errors are automatically retried by the library. You can configure how many times to retry or disable retries. It's like trying to reconnect your WiFi if it drops the first time. Here's what this code does:

  1. We can set a default number of retries for all requests when creating the client.
  2. We can also set the number of retries for a specific request using with_options().
  3. If an error occurs that's eligible for retry, the library will automatically try again up to the specified number of times.

This feature helps make your application more resilient to temporary network issues or server problems.

Timeouts

By default requests time out after 10 minutes. You can configure this with a timeout option, which accepts a float or an httpx.Timeout object:

import httpx
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # 20 seconds (default is 10 minutes)
    timeout=20.0,
)

# More granular control:
client = OpenAI(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I list all files in a directory using Python?",
        }
    ],
    model="gpt-3.5-turbo",
)

On timeout, an APITimeoutError is thrown.

Note that requests that time out are retried twice by default.

⏲️ Explanation:

You can set how long to wait for a response before timing out. It's like setting a timer for how long you'll wait for a friend before leaving. Here's what's happening:

  1. We can set a default timeout for all requests when creating the client.
  2. We can also set a timeout for a specific request using with_options().
  3. If the API doesn't respond within the specified time, the request will be cancelled and an error will be raised.

This helps prevent your application from hanging indefinitely if there's a problem with the API or network.

Advanced

Logging

We use the standard library logging module.

You can enable logging by setting the environment variable OPENAI_LOG to debug.

$ export OPENAI_LOG=debug
📜 Explanation:

Logging helps you see what's happening behind the scenes in your application. It's like having a detective's notebook that records everything that happens. By setting the OPENAI_LOG environment variable to debug, you're telling the library to write detailed information about its operations, which can be very helpful for troubleshooting problems.
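Since the SDK logs through the standard logging module, you can also surface its output from inside Python by configuring that module yourself. A minimal sketch (the documented switch remains the OPENAI_LOG=debug environment variable):

import logging

# Configure the standard logging module at DEBUG level so the SDK's log
# messages (and those of httpx underneath it) become visible on the console.
logging.basicConfig(level=logging.DEBUG)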

How to tell whether None means null or missing

In an API response, a field may be explicitly null, or missing entirely; in either case, its value is None in this library. You can differentiate the two cases with .model_fields_set:

if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')

Accessing raw response data (e.g. headers)

The "raw" Response object can be accessed by prefixing .with_raw_response. to any HTTP method call, e.g.,

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.with_raw_response.create(
    messages=[{
        "role": "user",
        "content": "Say this is a test",
    }],
    model="gpt-3.5-turbo",
)
print(response.headers.get('X-My-Header'))

completion = response.parse()  # get the object that `chat.completions.create()` would have returned
print(completion)
🔧 Explanation:

These methods return a LegacyAPIResponse object. This is a legacy class, as we're changing it slightly in the next major version.

For the sync client this will mostly be the same with the exception of content & text will be methods instead of properties. In the async client, all methods will be async.

A migration script will be provided & the migration in general should be smooth.

.with_streaming_response

The above interface eagerly reads the full response body when you make the request, which may not always be what you want.

To stream the response body, use .with_streaming_response instead, which requires a context manager and only reads the response body once you call .read(), .text(), .json(), .iter_bytes(), .iter_text(), .iter_lines() or .parse(). In the async client, these are async methods.

As such, .with_streaming_response methods return a different APIResponse object, and the async client returns an AsyncAPIResponse object.

with client.chat.completions.with_streaming_response.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-3.5-turbo",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)

The context manager is required so that the response will reliably be closed.

Making custom/undocumented requests

This library is typed for convenient access to the documented API.

If you need to access undocumented endpoints, params, or response properties, the library can still be used.

Undocumented endpoints

To make requests to undocumented endpoints, you can make requests using client.get, client.post, and other HTTP verbs. Options on the client (such as retries) will be respected when making these requests.

import httpx

response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)

print(response.headers.get("x-foo"))

Undocumented request params

If you want to explicitly send an extra param, you can do so with the extra_query, extra_body, and extra_headers request options.
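For example, a short sketch; the X-My-Header header and my_extra_field param are hypothetical and only show where these options go:

completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-3.5-turbo",
    extra_headers={"X-My-Header": "value"},   # hypothetical extra header
    extra_body={"my_extra_field": True},      # hypothetical undocumented param
)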

Undocumented response properties

To access undocumented response properties, you can access the extra fields like response.unknown_prop. You can also get all the extra fields on the Pydantic model as a dict with response.model_extra.
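A short sketch, assuming the response JSON happened to contain an extra "unknown_prop" field:

print(response.unknown_prop)   # one undocumented field (hypothetical name)
print(response.model_extra)    # every extra field, as a dict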

Configuring the HTTP client

You can directly override the httpx client to customize it for your use case, including:

  • Support for proxies
  • Custom transports
  • Additional advanced functionality
import httpx
from openai import OpenAI, DefaultHttpxClient

client = OpenAI(
    # Or use the `OPENAI_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=DefaultHttpxClient(
        proxies="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)

Managing HTTP resources

By default the library closes underlying HTTP connections whenever the client is garbage collected. You can manually close the client using the .close() method if desired, or with a context manager that closes when exiting.
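A minimal sketch of the context-manager form:

from openai import OpenAI

with OpenAI() as client:
    # make requests here
    ...

# the underlying HTTP connections are closed once the block exits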

Microsoft Azure OpenAI

To use this library with Azure OpenAI, use the AzureOpenAI class instead of the OpenAI class.

Important

The Azure API shape differs from the core API shape which means that the static types for responses / params won't always be correct.

from openai import AzureOpenAI

# gets the API Key from environment variable AZURE_OPENAI_API_KEY
client = AzureOpenAI(
    # https://learn.microsoft.com/azure/ai-services/openai/reference#rest-api-versioning
    api_version="2023-07-01-preview",
    # https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource
    azure_endpoint="https://example-endpoint.openai.azure.com",
)

completion = client.chat.completions.create(
    model="deployment-name",  # e.g. gpt-35-instant
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.to_json())

In addition to the options provided in the base OpenAI client, the following options are provided:

  • azure_endpoint (or the AZURE_OPENAI_ENDPOINT environment variable)
  • azure_deployment
  • api_version (or the OPENAI_API_VERSION environment variable)
  • azure_ad_token (or the AZURE_OPENAI_AD_TOKEN environment variable)
  • azure_ad_token_provider

An example of using the client with Microsoft Entra ID (formerly known as Azure Active Directory) can be found here.
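For reference, a minimal sketch of that pattern, assuming the azure-identity package is installed:

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    api_version="2023-07-01-preview",
    azure_endpoint="https://example-endpoint.openai.azure.com",
    azure_ad_token_provider=token_provider,
)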

πŸ”§ Explanation:

If you are using OpenAI through Microsoft Azure, you need to use the AzureOpenAI class. It's like using a different key to unlock the same door. Here's what's happening:

  1. We import the AzureOpenAI class instead of the regular OpenAI class.
  2. We create a client with Azure-specific parameters like api_version and azure_endpoint.
  3. We can then use this client to interact with the AI in the same way as before.

This allows you to use OpenAI's capabilities through Microsoft's Azure cloud platform, which might be preferred for certain business or integration reasons.

Versioning

This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:

  1. Changes that only affect static types, without breaking runtime behavior.
  2. Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals).
  3. Changes that we do not expect to impact the vast majority of users in practice.

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

We are keen for your feedback; please open an issue with questions, bugs, or suggestions.

πŸ”„ Explanation:

The library follows versioning rules to ensure backward compatibility. It's like updating an app on your phone to get new features without breaking the old ones. The developers try to make sure that when they release new versions:

  1. Your existing code will still work (backwards-compatibility).
  2. You know what to expect from each update (following SemVer conventions).
  3. You have a way to give feedback or report problems (through GitHub issues).

This helps you keep your projects up-to-date while minimizing the risk of unexpected breaks in your code.

Requirements

  1. Python 3.7 or higher : The OpenAI Python library requires Python version 3.7 or higher. This ensures compatibility with the latest features and security updates of the Python language.
  2. Internet Access : Since the OpenAI API communicates with OpenAI's servers over the internet, you'll need an internet connection to make API requests and receive responses.
  3. OpenAI API Key : To use the OpenAI API, you'll need an API key, which you can obtain by signing up for an account on the OpenAI platform. This key is used to authenticate your requests.
  4. pip (Python package installer) : You need pip to install the OpenAI Python library and its dependencies. Pip usually comes pre-installed with Python, but you can download it if needed.
  5. HTTP Client Library (httpx) : The OpenAI Python library uses the httpx library to make HTTP requests. This library will be installed automatically when you install the OpenAI Python library using pip.
  6. Dependencies : The library may require additional dependencies which will be installed automatically with the library. These dependencies include libraries necessary for making HTTP requests, handling JSON data, and other functionalities.

Python 3.7 or higher.

You need Python 3.7 or higher to use this library. It's like needing a specific version of an app to use certain features. Make sure your Python version is up to date before trying to use this library.

To check your Python version, you can open a terminal or command prompt and type:

python --version

If your version is lower than 3.7, you'll need to update Python to use this library. You can download the latest version of Python from the official Python website (https://www.python.org/downloads/).

Quick Definitions

  • .env: A file used to store environment variables that configure the environment in which your application runs.
  • API: A set of rules that lets different software programs communicate with each other.
  • Asynchronous: Doing multiple things at the same time without waiting for each task to complete one by one.
  • AzureOpenAI: A version of OpenAI that works with Microsoft Azure, a cloud computing service.
  • Chatbots: Programs designed to simulate conversation with human users, especially over the Internet.
  • Endpoints: Specific addresses where APIs can access resources or perform actions.
  • Error Handling: Ways to manage and respond to errors that occur in your program.
  • Fine-Tuning Models: Customizing an AI model with additional training to improve its performance for specific tasks.
  • GitHub OpenAI: The official GitHub organization for OpenAI, containing repositories related to OpenAI's research and projects.
  • Homebrew: A package manager for macOS and Linux, used to install and manage software packages.
  • HTTP: A protocol used for transferring data over the web. It's like the language that computers use to talk to each other on the internet.
  • HTTPS: The secure version of HTTP. It means the data transferred is encrypted and secure.
  • Library: A collection of pre-written code that you can use to make programming easier. Think of it like a toolbox with ready-to-use tools.
  • Nested Parameters: Parameters that are inside other parameters, like a list inside a list.
  • OpenAI: An artificial intelligence research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc.
  • OpenAI Models: The various AI models offered by OpenAI, including GPT-3, Codex, and DALL-E, which can be accessed via API.
  • Parameters: Pieces of information you provide to a function or request to control how it works.
  • Pip: The package installer for Python, used to install and manage Python packages.
  • Proxy: A server that acts as an intermediary between your computer and the internet.
  • Python: A popular programming language known for its simplicity and readability.
  • QuickStart: A guide provided by OpenAI to quickly get started with their API, including setup and example usage.
  • REST API: A type of API that uses HTTP requests to GET, PUT, POST, and DELETE data.
  • SDK: A Software Development Kit, a collection of software tools and libraries designed to help developers create applications for specific platforms.
  • Stainless: A tool used to generate this library from the OpenAPI specification.
  • Streaming Responses: Getting parts of a response as they come in, rather than waiting for the whole response.
  • Synchronous: Operations that are performed one at a time and must complete before moving on to the next operation.
  • v1 Migration Guide: A guide provided by OpenAI to assist developers in migrating their code to the updated version 1 of the OpenAI Python library.

About

The official Python library for the OpenAI API
