Description
The latest release supports streaming audio transcription. Unfortunately, the default model folks have been using ("whisper") doesn't support streaming, and it doesn't just fail: OpenAI's docs call out that if you use whisper, the stream parameter is simply ignored:
https://platform.openai.com/docs/api-reference/audio/createTranscription#audio-createtranscription-stream
Note: Streaming is not supported for the whisper-1 model and will be ignored.
This means that when the .NET library tries to parse the response as a stream, the async enumerable completes without ever yielding anything.
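For context, here's a minimal repro sketch in C#. The `AudioClient` construction and the `TranscribeAudioStreamingAsync` method name reflect my reading of the library's streaming surface and may not match the exact API; the model and file path are placeholders.

```csharp
using OpenAI.Audio;

// Assumption: AudioClient / TranscribeAudioStreamingAsync are the streaming
// entry points; adjust the names if the actual API differs.
AudioClient client = new("whisper-1", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

int updateCount = 0;

// Because whisper-1 ignores `stream`, the service sends back one plain JSON
// response, the client finds no SSE events to parse, and this loop body
// never executes.
await foreach (var update in client.TranscribeAudioStreamingAsync("audio.mp3"))
{
    updateCount++;
}

Console.WriteLine($"Updates received: {updateCount}"); // prints 0
```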
To me, the real problem here is the service silently ignoring stream. But we might also want to fix the client to recognize when it's getting back a non-streaming response and either surface the data through the streaming object model or throw an exception indicating what's happening, rather than just silently returning nothing (that would have saved me some time debugging to figure out what was going wrong).
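As a sketch of the exception option (purely hypothetical, not the library's actual internals): the non-streaming case should be detectable from the response's Content-Type, since an SSE stream comes back as text/event-stream while the ignored-stream case returns plain JSON.

```csharp
using System;
using System.Net.Http;

// Hypothetical helper illustrating the proposed guard; the real fix would
// live inside the library's response-parsing path.
static void EnsureStreamingResponse(HttpResponseMessage response)
{
    string? contentType = response.Content.Headers.ContentType?.MediaType;

    // whisper-1 silently drops `stream` and replies with application/json
    // instead of text/event-stream.
    if (!string.Equals(contentType, "text/event-stream", StringComparison.OrdinalIgnoreCase))
    {
        throw new NotSupportedException(
            $"Expected a streaming (text/event-stream) transcription response but got " +
            $"'{contentType}'. The requested model may not support streaming " +
            "(e.g., whisper-1), in which case the service ignores the 'stream' option.");
    }
}
```

The other option (surfacing the JSON body as a single completed update through the streaming object model) would keep existing call sites working, at the cost of hiding that streaming never actually happened.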