Hi everyone! At Doofinder we've been building llm_composer for some new apps, and we thought it could be useful to share it with the community.
llm_composer is an Elixir library that simplifies working with large language models (LLMs) across providers such as OpenAI (GPT), OpenRouter, Ollama, AWS Bedrock, and Google (Gemini).
It provides a streamlined way to build and execute LLM-based applications or chatbots, with features such as:
- Multi-provider support (OpenAI, OpenRouter, Ollama, Bedrock, Google Gemini/Vertex AI).
- System prompts and message history management.
- Streaming responses.
- Function calls with auto-execution.
- Structured outputs with JSON schema validation.
- Built-in cost tracking (currently for OpenRouter).
- Easy extensibility for custom use cases.
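To give a flavor of how these features fit together, here is a minimal chat sketch. The exact struct fields and function names (`LlmComposer.Settings`, `simple_chat/2`, the provider module) are illustrative assumptions based on the library's general shape; please check the HexDocs for the current API.

```elixir
defmodule MyAssistant do
  # Illustrative settings: provider module, model options, and a system prompt.
  # Field names are assumptions; consult the llm_composer docs for the real ones.
  @settings %LlmComposer.Settings{
    provider: LlmComposer.Providers.OpenAI,
    provider_opts: [model: "gpt-4o-mini"],
    system_prompt: "You are a helpful assistant."
  }

  # Sends a single user message and returns the model's reply.
  def ask(message) do
    LlmComposer.simple_chat(@settings, message)
  end
end
```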
A key feature is the provider router, which handles failover automatically: it uses one provider until a request fails, then falls back to the next provider in the list, applying an exponential backoff strategy. This makes it resilient in production environments, where provider APIs can become temporarily unavailable.
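Configuring failover might look something like the sketch below, with an ordered list of providers to try. The field names and option keys here are hypothetical, meant only to convey the idea; the HexDocs describe the actual router configuration.

```elixir
# Hypothetical sketch: an ordered provider list for automatic failover.
# The first provider is used until it errors; subsequent ones are fallbacks,
# retried with exponential backoff (exact keys are assumptions, not the real API).
settings = %LlmComposer.Settings{
  providers: [
    {LlmComposer.Providers.OpenAI, [model: "gpt-4o-mini"]},
    {LlmComposer.Providers.OpenRouter, [model: "meta-llama/llama-3-70b-instruct"]}
  ],
  system_prompt: "You are a helpful assistant."
}
```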
Under the hood, llm_composer uses Tesla as its HTTP client.
For production setups, especially when using streaming, we recommend running it with the Finch adapter for optimal performance.
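Switching Tesla to Finch is a standard two-step setup: configure the adapter and start a Finch pool in your supervision tree (the pool name `MyFinch` below is just an example).

```elixir
# config/config.exs — tell Tesla to use the Finch adapter
config :tesla, adapter: {Tesla.Adapter.Finch, name: MyFinch}
```

```elixir
# application.ex — start the Finch pool under your supervisor
children = [
  {Finch, name: MyFinch}
]

Supervisor.start_link(children, strategy: :one_for_one)
```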
More info and docs:
HexDocs: LlmComposer — llm_composer v0.11.1
GitHub: