Langfuse
Open-source LLM observability platform for tracing, evaluating, and debugging AI applications — self-host or use the cloud.
Why Langfuse?
You need full visibility into LLM call traces, costs, and latency
You're running evals or A/B testing different prompts and models
You need to self-host observability data for compliance or privacy
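The eval use case in the second bullet can be sketched with the Langfuse TypeScript SDK: trace a prompt variant, then attach a numeric eval score to that trace. The trace name, metric name, and metadata below are illustrative assumptions, not a prescribed schema.

```typescript
import Langfuse from 'langfuse';

// Assumes LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY are set in the environment.
const langfuse = new Langfuse({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
  secretKey: process.env.LANGFUSE_SECRET_KEY,
});

// Trace one variant of an A/B prompt test.
const trace = langfuse.trace({
  name: 'summarize-v2',              // hypothetical prompt variant
  metadata: { promptVersion: 'B' },
});

// ... run the LLM call and your evaluator on its output here ...

// Attach the evaluator's result as a score on the trace.
langfuse.score({
  traceId: trace.id,
  name: 'faithfulness',              // hypothetical eval metric
  value: 0.92,                       // numeric score from your evaluator
});

await langfuse.flushAsync();         // send queued events before exit
```

Scores recorded this way can then be compared across prompt variants in the Langfuse UI; `trace.score({ name, value })` is an equivalent shorthand that fills in the trace ID for you.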
Signal Breakdown
What drives the Trust Score
Download Trend
Last 12 months
Tradeoffs & Caveats
Know before you commit
Tiny hobby project where observability overhead isn't worth it
You're already deeply integrated with LangSmith and LangChain
Pricing
Free tier & paid plans
Free tier: 50k observations/mo
Pro: $59/mo, Team: $499/mo
Alternative Tools
Other options worth considering
LangSmith — LangChain's observability and evaluation platform: trace, debug, and evaluate LLM applications with deep LangChain ecosystem integration.
Helicone — lightweight LLM observability via a proxy URL swap: get cost tracking, request logging, and caching with a one-line integration.
Often Used Together
Complementary tools that pair well with Langfuse
Learning Resources
Docs, videos, tutorials, and courses
Get Started
Repository and installation options
View on GitHub
github.com/langfuse/langfuse
npm install langfuse
pip install langfuse
Quick Start
Copy and adapt to get going fast
import Langfuse from 'langfuse';
const langfuse = new Langfuse({
  secretKey: process.env.LANGFUSE_SECRET_KEY,
  publicKey: process.env.LANGFUSE_PUBLIC_KEY, // the client needs both keys
});
const trace = langfuse.trace({ name: 'chat-completion' });
const span = trace.span({ name: 'openai-call' });
// ... make your LLM call ...
span.end({ output: responseText });
await langfuse.flushAsync();
Community Notes
Real experiences from developers who've used this tool