A year ago, I released the first version of LLM API Adapter — a lightweight SDK that unified OpenAI, Anthropic, and Google APIs under one interface.
It got 7 ⭐ on GitHub and valuable feedback from early users.
That was enough motivation to take it to the next level.
What changed in the new version
The new version (v0.2.2) is now:
- SDK-free — it talks directly to provider APIs, no external dependencies.
- Unified — one `chat()` interface for all models (OpenAI, Anthropic, Google).
- Transparent — automatic token and cost tracking.
- Resilient — consistent error taxonomy across providers (auth, rate, timeout, token limits); a sketch of the idea follows this list.
- Tested — 98% unit test coverage.
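To make "consistent error taxonomy" concrete, here is a minimal sketch of the idea. The class and function names below are illustrative assumptions, not the library's actual API; check the project docs for the real exception types.

```python
# Illustration only: these names are assumptions, not the library's
# confirmed API. The point is the shape of the idea: every provider's
# raw failure is translated into one shared exception hierarchy.

class LLMAPIError(Exception):
    """Base class for provider-agnostic errors."""

class AuthError(LLMAPIError): ...
class RateLimitError(LLMAPIError): ...
class RequestTimeoutError(LLMAPIError): ...
class TokenLimitError(LLMAPIError): ...

# One table maps raw HTTP statuses from any provider onto the taxonomy.
_STATUS_MAP = {
    401: AuthError,
    403: AuthError,
    408: RequestTimeoutError,
    429: RateLimitError,
}

def raise_unified(status: int, message: str) -> None:
    """Translate a raw provider failure into a shared exception type."""
    raise _STATUS_MAP.get(status, LLMAPIError)(message)

try:
    raise_unified(429, "openai: requests per minute exceeded")
except RateLimitError as exc:
    print("retry later:", exc)  # same handler works for every provider
```

The payoff: retry and fallback logic is written once against the shared hierarchy instead of once per provider.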
Example: chat with any LLM
```python
from llm_api_adapter.universal_adapter import UniversalLLMAPIAdapter

adapter = UniversalLLMAPIAdapter(provider="openai", model="gpt-5")

response = adapter.chat([
    {"role": "system", "content": "Be concise."},
    {"role": "user", "content": "Explain how LLM adapters work."},
])
print(response.content)
```

Switching models is as simple as changing two parameters:
```python
adapter = UniversalLLMAPIAdapter(provider="anthropic", model="claude-sonnet-4-5")
# or
adapter = UniversalLLMAPIAdapter(provider="google", model="gemini-2.5-pro")
```
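Because the call signature never changes, the same conversation can be run against several providers in a loop. A quick sketch, assuming API keys are already configured for each provider:

```python
from llm_api_adapter.universal_adapter import UniversalLLMAPIAdapter

messages = [
    {"role": "system", "content": "Be concise."},
    {"role": "user", "content": "Explain how LLM adapters work."},
]

# Provider/model pairs taken from the examples above.
targets = [
    ("openai", "gpt-5"),
    ("anthropic", "claude-sonnet-4-5"),
    ("google", "gemini-2.5-pro"),
]

for provider, model in targets:
    adapter = UniversalLLMAPIAdapter(provider=provider, model=model)
    response = adapter.chat(messages)
    print(f"{provider}: {response.content[:80]}")
```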
Token & cost tracking example
Every response now includes full token and cost accounting — no manual math needed.
```python
from llm_api_adapter.universal_adapter import UniversalLLMAPIAdapter

# google_api_key and chat_params are assumed to be defined earlier
# (an API key string and the same kind of arguments passed to chat() above).
google = UniversalLLMAPIAdapter(
    organization="google",
    model="gemini-2.5-pro",
    api_key=google_api_key,
)

response = google.chat(**chat_params)

print(response.usage.input_tokens, "tokens", f"({response.cost_input} {response.currency})")
print(response.usage.output_tokens, "tokens", f"({response.cost_output} {response.currency})")
print(response.usage.total_tokens, "tokens", f"({response.cost_total} {response.currency})")
```

Output:
```
512 tokens (0.00025 USD)
137 tokens (0.00010 USD)
649 tokens (0.00035 USD)
```
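For intuition about where those numbers come from: cost is just token count multiplied by a per-token rate, with input and output priced separately. A back-of-the-envelope sketch with made-up rates (the real per-model prices are maintained inside the library):

```python
# Hypothetical rates for illustration only; actual per-model pricing
# is tracked by the library itself.
INPUT_USD_PER_1M = 0.50   # made-up input price per 1M tokens
OUTPUT_USD_PER_1M = 0.75  # made-up output price per 1M tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> tuple:
    """Redo the adapter's cost accounting by hand."""
    cost_in = input_tokens / 1_000_000 * INPUT_USD_PER_1M
    cost_out = output_tokens / 1_000_000 * OUTPUT_USD_PER_1M
    return cost_in, cost_out, cost_in + cost_out

print(estimate_cost(512, 137))  # (0.000256, 0.00010275, 0.00035875)
```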
Why I built this
Working with multiple LLMs used to mean rewriting the same code — again and again.
Each SDK had its own method names, parameter names, and error classes.
So I built a unified interface that abstracts those details.
One adapter — one consistent experience.
Join the project
You can try it now:
```bash
pip install llm-api-adapter
```

Docs & examples: github.com/Inozem/llm_api_adapter
If you like the idea — ⭐ star it or share feedback in Issues.