LiteLLM
Single Python interface and proxy server for 100+ LLM providers — call any model with the OpenAI SDK format.
MIT
Python
Why LiteLLM?
You need to switch between LLM providers without rewriting code
You're building a proxy/gateway to centralize API key management and logging
You're experimenting with cost and latency tradeoffs across models
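When weighing cost tradeoffs across models, a quick back-of-the-envelope comparison helps. A minimal sketch — the per-million-token prices below are hypothetical placeholders, not current rates:

```python
# Rough cost comparison across models for a fixed monthly workload.
# Prices are HYPOTHETICAL placeholders (USD per 1M tokens) -- check each
# provider's current pricing page before relying on these numbers.
PRICES = {
    "gpt-4o":            {"input": 2.50, "output": 10.00},
    "claude-3-5-sonnet": {"input": 3.00, "output": 15.00},
    "gemini-1.5-flash":  {"input": 0.10, "output": 0.40},
}

def monthly_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a month's traffic on one model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example workload: 50M input tokens, 10M output tokens per month.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 50_000_000, 10_000_000):,.2f}")
```

Because LiteLLM keeps the call site identical across providers, switching to whichever model wins this comparison is a one-line model-string change.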
Signal Breakdown
What drives the Trust Score
Download Trend
Last 12 months
Tradeoffs & Caveats
Know before you commit
Your app is TypeScript-only: use the Vercel AI SDK instead
You only use one provider and don't need abstraction overhead
Pricing
Free tier & paid plans
Open source, free to use
LiteLLM Enterprise: custom pricing for SSO, audit logs, RBAC
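For the proxy/gateway use case, the open-source proxy server is configured with a YAML file that maps client-facing model names to provider credentials. A minimal sketch, assuming the documented `model_list` schema — model names and env-var references here are illustrative:

```yaml
# config.yaml -- minimal LiteLLM proxy sketch (illustrative values)
model_list:
  - model_name: gpt-4o                # name clients request
    litellm_params:
      model: openai/gpt-4o            # underlying provider model
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
```

Start the proxy with `litellm --config config.yaml`; clients then point the OpenAI SDK at the proxy's URL. Enterprise features like SSO and RBAC layer on top of this same server.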
Alternative Tools
Other options worth considering
Vercel AI SDK: unified TypeScript SDK for building AI-powered streaming UIs with any LLM provider, including OpenAI, Anthropic, and Google.
Unified API gateway to 200+ LLMs with automatic fallbacks, cost routing, and a single API key — drop-in OpenAI SDK compatible.
Often Used Together
Complementary tools that pair well with LiteLLM
Learning Resources
Docs, videos, tutorials, and courses
Get Started
Repository and installation options
View on GitHub
github.com/BerriAI/litellm
pip install litellm
Quick Start
Copy and adapt to get going fast
import litellm

# Assumes the provider's API key (here OPENAI_API_KEY) is set in your environment.
response = litellm.completion(
model="gpt-4o",
messages=[{"role": "user", "content": "Hello!"}]
)
# Same code works for claude-3-5-sonnet, gemini/gemini-pro, etc.
print(response.choices[0].message.content)
Community Notes
Real experiences from developers who've used this tool