vocode

Build voice-based LLM apps in minutes

Vocode is an open source library that makes it easy to build voice-based LLM apps. Using Vocode, you can build real-time streaming conversations with LLMs and deploy them to phone calls, Zoom meetings, and more. You can also build personal assistants or apps like voice-based chess. Vocode provides easy abstractions and integrations so that everything you need is in a single library.

⭐️ Features

Check out our React SDK here!

🫂 Contribution

We'd love for you all to build on top of our abstractions to enable new and better LLM voice applications!

You can extend our BaseAgent, BaseTranscriber, and BaseSynthesizer abstractions to integrate new LLM APIs, speech recognition providers, and speech synthesis providers. More detail here.

You can also work with our BaseInputDevice and BaseOutputDevice abstractions to set up voice applications on new surfaces/platforms. More guides for this coming soon!

Because our StreamingConversation runs locally, it's quick to develop against: fork the repo, create a PR, and we'll merge it as soon as possible. We'd also love to talk to you on Discord!
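The abstractions above all follow the same plug-in pattern: subclass a base class, implement its hook, and hand the instance to the conversation. The sketch below illustrates that pattern without the library itself; `BaseAgent`, `respond`, and `EchoAgent` here are illustrative stand-ins, not vocode's actual class signatures (those live in `vocode.streaming`).

```python
import asyncio
from abc import ABC, abstractmethod


# Illustrative stand-in for vocode's agent abstraction: an agent turns a
# transcribed utterance into a text reply. Names/signatures are a sketch only.
class BaseAgent(ABC):
    @abstractmethod
    async def respond(self, human_input: str) -> str: ...


class EchoAgent(BaseAgent):
    """A trivial custom agent: repeats the caller's words back."""

    async def respond(self, human_input: str) -> str:
        return f"You said: {human_input}"


async def demo() -> str:
    # The conversation only depends on the BaseAgent interface, so any
    # subclass (an LLM-backed agent, a rules engine, ...) can slot in.
    agent: BaseAgent = EchoAgent()
    return await agent.respond("hello")


print(asyncio.run(demo()))  # -> You said: hello
```

The same shape applies to transcribers and synthesizers: the conversation orchestrates objects through a small async interface, which is why swapping providers doesn't touch the rest of the pipeline.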

🚀 Quickstart (Self-hosted)

```shell
pip install 'vocode[io]'
```

```python
import asyncio
import signal

import vocode
from vocode.streaming.streaming_conversation import StreamingConversation
from vocode.helpers import create_microphone_input_and_speaker_output
from vocode.streaming.models.transcriber import (
    DeepgramTranscriberConfig,
    PunctuationEndpointingConfig,
)
from vocode.streaming.agent.chat_gpt_agent import ChatGPTAgent
from vocode.streaming.models.agent import ChatGPTAgentConfig
from vocode.streaming.models.message import BaseMessage
from vocode.streaming.models.synthesizer import AzureSynthesizerConfig
from vocode.streaming.synthesizer.azure_synthesizer import AzureSynthesizer
from vocode.streaming.transcriber.deepgram_transcriber import DeepgramTranscriber

# These can also be set as environment variables
vocode.setenv(
    OPENAI_API_KEY="<your OpenAI key>",
    DEEPGRAM_API_KEY="<your Deepgram key>",
    AZURE_SPEECH_KEY="<your Azure key>",
    AZURE_SPEECH_REGION="<your Azure region>",
)


async def main():
    microphone_input, speaker_output = create_microphone_input_and_speaker_output(
        streaming=True, use_default_devices=False
    )

    conversation = StreamingConversation(
        output_device=speaker_output,
        transcriber=DeepgramTranscriber(
            DeepgramTranscriberConfig.from_input_device(
                microphone_input, endpointing_config=PunctuationEndpointingConfig()
            )
        ),
        agent=ChatGPTAgent(
            ChatGPTAgentConfig(
                initial_message=BaseMessage(text="Hello!"),
                prompt_preamble="Have a pleasant conversation about life",
            ),
        ),
        synthesizer=AzureSynthesizer(
            AzureSynthesizerConfig.from_output_device(speaker_output)
        ),
    )
    await conversation.start()
    print("Conversation started, press Ctrl+C to end")
    signal.signal(signal.SIGINT, lambda _0, _1: conversation.terminate())
    while conversation.is_active():
        chunk = microphone_input.get_audio()
        if chunk:
            conversation.receive_audio(chunk)
        await asyncio.sleep(0)


if __name__ == "__main__":
    asyncio.run(main())
```
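The `while` loop at the end of the self-hosted example is the essential streaming pattern: read an audio chunk if one is available, hand it to the conversation, and `await asyncio.sleep(0)` to yield to the event loop so the transcription and synthesis tasks can make progress. Stripped of the vocode classes, the pattern looks like this (the queue and function names here are illustrative, with a fixed chunk count standing in for `conversation.is_active()`):

```python
import asyncio


async def pump_audio(source: asyncio.Queue, sink: list, total_chunks: int) -> None:
    """Forward audio chunks from source to sink, yielding between reads."""
    processed = 0
    while processed < total_chunks:  # stands in for conversation.is_active()
        try:
            chunk = source.get_nowait()  # stands in for microphone_input.get_audio()
        except asyncio.QueueEmpty:
            chunk = None
        if chunk is not None:
            sink.append(chunk)  # stands in for conversation.receive_audio(chunk)
            processed += 1
        # Yield control so other tasks (speech-to-text, text-to-speech) can run;
        # without this, the loop would starve the event loop.
        await asyncio.sleep(0)


async def demo() -> list:
    q: asyncio.Queue = asyncio.Queue()
    for chunk in (b"\x00\x01", b"\x02\x03", b"\x04\x05"):
        q.put_nowait(chunk)
    received: list = []
    await pump_audio(q, received, total_chunks=3)
    return received
```

The `sleep(0)` is the key detail: it is a cooperative yield, not a delay, which is what keeps the conversation responsive while audio is pumped in a tight loop.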

☁️ Quickstart (Hosted)

First, get a free API key from our dashboard.

```shell
pip install 'vocode[io]'
```

```python
import asyncio
import signal

import vocode
from vocode.streaming.hosted_streaming_conversation import HostedStreamingConversation
from vocode.helpers import create_microphone_input_and_speaker_output
from vocode.streaming.models.transcriber import (
    DeepgramTranscriberConfig,
    PunctuationEndpointingConfig,
)
from vocode.streaming.models.agent import ChatGPTAgentConfig
from vocode.streaming.models.message import BaseMessage
from vocode.streaming.models.synthesizer import AzureSynthesizerConfig

vocode.api_key = "<your API key>"

if __name__ == "__main__":
    microphone_input, speaker_output = create_microphone_input_and_speaker_output(
        streaming=True, use_default_devices=False
    )

    conversation = HostedStreamingConversation(
        input_device=microphone_input,
        output_device=speaker_output,
        transcriber_config=DeepgramTranscriberConfig.from_input_device(
            microphone_input,
            endpointing_config=PunctuationEndpointingConfig(),
        ),
        agent_config=ChatGPTAgentConfig(
            initial_message=BaseMessage(text="Hello!"),
            prompt_preamble="Have a pleasant conversation about life",
        ),
        synthesizer_config=AzureSynthesizerConfig.from_output_device(speaker_output),
    )
    signal.signal(signal.SIGINT, lambda _0, _1: conversation.deactivate())
    asyncio.run(conversation.start())
```

📞 Phone call quickstarts

🌱 Documentation

docs.vocode.dev
