This middleware application provides an OpenAI-compatible endpoint for Dify instances, allowing you to use OpenAI API clients with Dify's API. It maintains conversation context and supports both streaming and non-streaming responses.
- Node.js >= 18.0.0
- A running Dify instance with API access
- Your Dify API key
- Clone this repository:
  ```bash
  git clone https://github.com/yourusername/dify2openai.git
  cd dify2openai
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Configure environment variables:

  ```bash
  # Copy the example environment file
  cp .env.example .env

  # Edit .env with your settings
  nano .env  # or use your preferred editor
  ```

Required environment variables:

- `DIFY_API_URL`: Your Dify API URL (e.g., `http://your-dify-instance/v1`)
- `DIFY_API_KEY`: Your Dify API key (found in your Dify application settings)
- `PORT`: Port number for the middleware server (default: 3000)
- `LOG_LEVEL`: Logging verbosity level (default: `info`)
  - `error`: Only errors
  - `warn`: Errors and warnings
  - `info`: Basic operational info (default)
  - `debug`: Detailed debugging information
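For reference, a filled-in `.env` might look like this (all values below are placeholders for your own instance):

```env
DIFY_API_URL=http://your-dify-instance/v1
DIFY_API_KEY=your-dify-api-key
PORT=3000
LOG_LEVEL=info
```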
- Start the server:
  ```bash
  # Production mode
  npm start

  # Development mode with auto-reload
  npm run dev
  ```

- The middleware will run on `http://localhost:3000` (or your configured `PORT`)
- Use any OpenAI API client by pointing it to your middleware URL. Examples:

  ```javascript
  import OpenAI from 'openai';

  const openai = new OpenAI({
    baseURL: 'http://localhost:3000/v1',
    apiKey: 'not-needed' // The middleware uses Dify's API key
  });

  const completion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Hello!' }],
    model: 'gpt-3.5-turbo', // Model name is ignored, Dify's configured model is used
    stream: true // Supports both streaming and non-streaming
  });
  ```

  ```python
  from openai import OpenAI

  client = OpenAI(
      base_url="http://localhost:3000/v1",
      api_key="not-needed"  # The middleware uses Dify's API key
  )

  completion = client.chat.completions.create(
      messages=[{"role": "user", "content": "Hello!"}],
      model="gpt-3.5-turbo",  # Model name is ignored
      stream=True  # Supports both streaming and non-streaming
  )
  ```

  ```bash
  curl http://localhost:3000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "gpt-3.5-turbo",
      "messages": [{"role": "user", "content": "Hello!"}]
    }'
  ```

Features:

- OpenAI API compatibility:
  - Supports the chat completions endpoint
  - Works with any OpenAI API client
  - Maintains conversation context (see the sketch after this list)
- Response formats:
  - Supports both streaming and non-streaming responses
  - Matches OpenAI's response format
- Error handling:
  - Graceful error handling and reporting
  - OpenAI-compatible error responses
- Development friendly:
  - Easy setup with environment variables
  - Development mode with auto-reload
  - Configurable logging levels
  - Detailed debug information
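Under the hood, "maintaining conversation context" presumably means mapping an OpenAI-style message list onto Dify's `conversation_id`-based chat API. The sketch below is illustrative only, not the middleware's actual code: it assumes Dify's `POST /chat-messages` endpoint, and the `forwardToDify` helper and its mapping logic are hypothetical.

```javascript
// Hypothetical sketch of the OpenAI -> Dify translation (not the actual implementation).
async function forwardToDify({ messages, user, stream }, conversationId) {
  // Dify tracks history via conversation_id, so only the latest user message
  // needs to be sent as the query.
  const lastUserMessage = [...messages].reverse().find(m => m.role === 'user');

  return fetch(`${process.env.DIFY_API_URL}/chat-messages`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.DIFY_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      inputs: {},
      query: lastUserMessage?.content ?? '',
      response_mode: stream ? 'streaming' : 'blocking',
      conversation_id: conversationId ?? '', // empty string starts a new conversation
      user: user ?? 'dify2openai-user'
    })
  });
}
```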
The middleware uses a leveled logging system with timestamps:
```
[2024-01-23T12:34:56.789Z] [INFO] Dify2OpenAI middleware running on port 3000
```

Log levels (from least to most verbose):

- `error`: Critical issues that need immediate attention
- `warn`: Important issues that don't affect core functionality
- `info`: General operational information (default)
- `debug`: Detailed information for debugging
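The log line format shown above can be produced by a simple leveled logger. The snippet below is a generic illustration of the pattern (the `LEVELS` table and `log` function are not taken from this codebase):

```javascript
// Illustrative leveled logger, not the middleware's actual code.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };
const threshold = LEVELS[process.env.LOG_LEVEL] ?? LEVELS.info;

function log(level, message) {
  // Only emit messages at or below the configured verbosity.
  if (LEVELS[level] > threshold) return;
  console.log(`[${new Date().toISOString()}] [${level.toUpperCase()}] ${message}`);
}

log('info', 'Dify2OpenAI middleware running on port 3000');
log('debug', 'Only shown when LOG_LEVEL=debug');
```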
Configure the log level in your .env file:
```bash
# Set to error, warn, info, or debug
LOG_LEVEL=info
```

Debug logs include:
- Conversation ID tracking
- Message format conversion details
- Request/response information
- Streaming events
- Client connections/disconnections
The middleware exposes a single endpoint that mimics OpenAI's chat completions API:
```
POST /v1/chat/completions
```
Request format follows OpenAI's specification:
{ "messages": [ {"role": "user", "content": "Hello!"} ], "model": "gpt-3.5-turbo", // Ignored, uses Dify's model "stream": true, // Optional, defaults to false "user": "user123" // Optional }- Only supports the chat completions endpoint
- Model selection is ignored (uses Dify's configured model)
- Some OpenAI-specific parameters may be ignored
- Function calling is not supported
The middleware provides a health check endpoint:
```
GET /health
```

Returns `{"status": "ok"}` when the server is running.
The middleware handles various error cases:
- Invalid request format
- Missing/invalid messages
- API errors from Dify
- Network errors
- Server errors
All errors are returned in a format compatible with OpenAI's error responses:
{ "error": { "message": "Error description", "type": "error_type", "status": 400 // HTTP status code } }