LLM Observability

Langfuse

TypeScript · Python · Open Source · Tracing

Open-source LLM observability platform for tracing, evaluating, and debugging AI applications — self-host or use the cloud.

License

MIT

Language

TypeScript

Trust Score

85 (Strong)

Why Langfuse?

You need full visibility into LLM call traces, costs, and latency

Running evals and A/B testing different prompts or models

Self-hosting observability data for compliance or privacy
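The per-call visibility described above boils down to recording latency, token usage, and derived cost for every LLM call. A toy, stdlib-only sketch of that idea (the class, fields, and per-token price are all hypothetical illustrations, not Langfuse's API):

```python
import time
from dataclasses import dataclass, field

# Hypothetical per-1k-token price, for illustration only.
PRICE_PER_1K_TOKENS = 0.002

@dataclass
class Span:
    """Records what an observability span captures for one LLM call."""
    name: str
    started_at: float = field(default_factory=time.perf_counter)
    latency_s: float = 0.0
    tokens: int = 0
    cost_usd: float = 0.0

    def end(self, tokens: int) -> None:
        # Latency is measured from span creation to end().
        self.latency_s = time.perf_counter() - self.started_at
        self.tokens = tokens
        self.cost_usd = tokens / 1000 * PRICE_PER_1K_TOKENS

# Simulate tracing one "LLM call".
span = Span(name="openai-call")
time.sleep(0.01)          # stand-in for the real network call
span.end(tokens=500)

print(f"{span.name}: {span.latency_s:.3f}s, {span.tokens} tokens, ${span.cost_usd:.4f}")
```

A real platform adds nesting (traces containing spans), persistence, and a UI on top, but the captured fields are essentially these.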

Signal Breakdown

What drives the Trust Score

npm downloads: 160k / wk
Commits (90d): 310
GitHub stars: 10k ★
Stack Overflow: 60 questions
Community: Growing

Weighted Trust Score: 85 / 100
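Mechanically, a weighted trust score is a plain weighted average of normalized per-signal scores. A sketch with made-up weights and signal values (the numbers are illustrative assumptions, not the formula behind the 85 above):

```python
# Hypothetical normalized signal scores (0-100) and weights; illustrative only.
signals = {
    "npm downloads":  (90, 0.30),
    "commits":        (85, 0.25),
    "github stars":   (80, 0.20),
    "stack overflow": (70, 0.10),
    "community":      (85, 0.15),
}

def weighted_score(signals: dict[str, tuple[float, float]]) -> float:
    # Normalize by total weight so the result stays on the 0-100 scale
    # even if the weights don't sum to exactly 1.
    total_weight = sum(w for _, w in signals.values())
    return sum(score * w for score, w in signals.values()) / total_weight

print(round(weighted_score(signals)))  # prints 84 with these sample numbers
```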

Download Trend

Last 12 months

Tradeoffs & Caveats

Know before you commit

Skip it for a tiny hobby project where the observability overhead isn't worth it

Weigh the switching cost if you're already deeply integrated with LangSmith and LangChain

Pricing

Free tier & paid plans

Free tier

50k observations/mo

Paid

Pro: $59/mo, Team: $499/mo

Alternative Tools

Other options worth considering

LangSmith: 80 (Strong)

LangChain's observability and evaluation platform — trace, debug, and evaluate LLM applications with deep LangChain ecosystem integration.

Helicone: 73 (Good)

Lightweight LLM observability via a proxy URL swap — get cost tracking, request logging, and caching with a one-line integration.

Braintrust: 70 (Good)

AI evaluation and observability platform focused on running structured evals, scoring LLM outputs, and prompt iteration workflows.

Often Used Together

Complementary tools that pair well with Langfuse

OpenAI API (LLM APIs): 87 (Strong)

LiteLLM (LLM APIs): 82 (Strong)

LangChain (AI Orchestration): 96 (Excellent)

Learning Resources

Docs, videos, tutorials, and courses

Get Started

Repository and installation options

View on GitHub

github.com/langfuse/langfuse

npm: npm install langfuse
pip: pip install langfuse

Quick Start

Copy and adapt to get going fast

import { Langfuse } from 'langfuse';

const langfuse = new Langfuse({
  secretKey: process.env.LANGFUSE_SECRET_KEY,
  publicKey: process.env.LANGFUSE_PUBLIC_KEY,
});

const trace = langfuse.trace({ name: 'chat-completion' });
const span = trace.span({ name: 'openai-call' });

// ... make your LLM call ...

span.end({ output: responseText });
await langfuse.flushAsync(); // send buffered events before the process exits
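Since the install options above include pip, here is a rough Python equivalent of the same quick start. This assumes the v2-style Langfuse Python client and valid LANGFUSE_SECRET_KEY / LANGFUSE_PUBLIC_KEY environment variables; check the docs for the SDK version you installed:

```python
from langfuse import Langfuse  # pip install langfuse; assumes the v2-style SDK

# The client reads LANGFUSE_SECRET_KEY / LANGFUSE_PUBLIC_KEY from the environment.
langfuse = Langfuse()

trace = langfuse.trace(name="chat-completion")
span = trace.span(name="openai-call")

# ... make your LLM call, producing response_text ...

span.end(output=response_text)
langfuse.flush()  # send buffered events before the process exits
```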

Community Notes

Real experiences from developers who've used this tool