llm_composer - an Elixir library for building LLM-based applications

Hi everyone, at Doofinder we have been building llm_composer for some new apps, and we thought it could be useful to share it with the community.

llm_composer is an Elixir library that simplifies working with large language models (LLMs) across providers such as OpenAI (GPT), OpenRouter, Ollama, AWS Bedrock, and Google (Gemini).

It provides a streamlined way to build and run LLM-based applications or chatbots (a minimal usage example follows the list), with features such as:

  • Multi-provider support (OpenAI, OpenRouter, Ollama, Bedrock, Google Gemini/Vertex AI).
  • System prompts and message history management.
  • Streaming responses.
  • Function calls with auto-execution.
  • Structured outputs with JSON schema validation.
  • Built-in cost tracking (currently for OpenRouter).
  • Easy extensibility for custom use cases.
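To give a feel for the API, here is a minimal chat bot based on the README's basic example. This is a sketch: the option names come from an earlier release and may have shifted by v0.11, so check the HexDocs for the current API.

```elixir
# Minimal llm_composer chat, adapted from the README's basic example.
# Option names are from an earlier release and may differ in v0.11 -
# treat this as a sketch and consult the HexDocs for the current API.
Application.put_env(:llm_composer, :openai_key, System.get_env("OPENAI_API_KEY"))

defmodule MyChat do
  @settings %LlmComposer.Settings{
    model: LlmComposer.Models.OpenAI,
    model_opts: [model: "gpt-4o-mini"],
    system_prompt: "You are a helpful assistant."
  }

  def ask(msg), do: LlmComposer.simple_chat(@settings, msg)
end

{:ok, response} = MyChat.ask("Hello!")
IO.inspect(response.main_response)
```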

A key feature is the provider router, which handles failover automatically: it uses one provider until it fails, then falls back to the next one in the list, applying an exponential backoff strategy. This makes it resilient in production environments where provider APIs can become temporarily unavailable.
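For illustration, wiring up a fallback chain looks roughly like this. The module and option names below are our illustrative shorthand, not the exact API; the README's router section has the real configuration.

```elixir
# Illustrative failover sketch - the provider list and option names
# below are hypothetical shorthand, not the library's exact API.
settings = %LlmComposer.Settings{
  # Primary provider first; the router falls back down the list,
  # retrying with exponential backoff when a provider keeps failing.
  providers: [
    {LlmComposer.Providers.OpenAI, [model: "gpt-4o-mini"]},
    {LlmComposer.Providers.OpenRouter, [model: "openai/gpt-4o-mini"]}
  ],
  system_prompt: "You are a helpful assistant."
}

{:ok, response} = LlmComposer.simple_chat(settings, "Hello!")
```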

Under the hood, llm_composer uses Tesla as its HTTP client.
For production setups, especially when streaming, we recommend running it with the Finch adapter for best performance.
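Concretely, that is plain Tesla configuration: start a Finch pool in your supervision tree and point Tesla's adapter at it (`MyApp.Finch` is just an example name), for example:

```elixir
# lib/my_app/application.ex - start a Finch pool under your supervisor:
children = [
  {Finch, name: MyApp.Finch}
]

# config/config.exs - tell Tesla to use the Finch adapter:
config :tesla, adapter: {Tesla.Adapter.Finch, name: MyApp.Finch}
```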


More info and docs:

HexDocs: llm_composer v0.11.1
GitHub:

9 Likes

This looks great :slight_smile:

1 Like

Thanks! :raising_hands:

We’ve also improved the README with clearer examples for different use cases:

  • Streaming responses
  • Using the “Simple” router with fallback + retries
  • Tesla setup and configuration

So it should be much easier to get started now. :rocket:
Feel free to check it out and let us know if you have any feedback!

Thanks for your efforts! :raising_hands:

Could you highlight the differences from ReqLLM, discussed in this topic? I believe both libraries aim to solve the same problem.

3 Likes

We started this lib about a year ago (we’d been using it internally even before that) and only now decided to share it here.

Both libraries aim at the same thing: wrapping LLM provider APIs behind a unified client interface.

The main differences are:

  • HTTP layer: we use Tesla, while ReqLLM builds on Req.

  • Routing: llm_composer includes a router with fallback + retries.

  • API style: each library exposes a slightly different way to configure and work with “bots” or requests, so it depends on which style you prefer.

2 Likes