4 tools tracked: Langfuse · LangSmith · Helicone · Braintrust

Langfuse — Open-source LLM observability platform for tracing, evaluating, and debugging AI applications; self-host or use the cloud.
LangSmith — LangChain's observability and evaluation platform: trace, debug, and evaluate LLM applications with deep LangChain ecosystem integration.
Helicone — Lightweight LLM observability via a proxy URL swap: cost tracking, request logging, and caching with a one-line integration.
Braintrust — AI evaluation and observability platform focused on running structured evals, scoring LLM outputs, and prompt-iteration workflows.
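The proxy-URL-swap approach described above (Helicone's) can be sketched with the standard library alone: the request body and path stay exactly as they would be for a direct provider call, only the host changes, plus one extra auth header so the proxy can attribute logs and costs. The host `oai.helicone.ai` and the `Helicone-Auth` header name follow Helicone's published OpenAI proxy setup, but treat both as assumptions and check the current docs before relying on them.

```python
import json
import urllib.request

# Assumption: Helicone's OpenAI-compatible proxy base URL.
# A direct call would instead target https://api.openai.com/v1
PROXY_BASE = "https://oai.helicone.ai/v1"

def build_proxied_request(api_key: str, helicone_key: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completion request routed via the proxy."""
    payload = {
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello"}],
    }
    return urllib.request.Request(
        url=f"{PROXY_BASE}/chat/completions",  # same path as the direct call
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
            # Extra header for the proxy's own auth/attribution (assumed name)
            "Helicone-Auth": f"Bearer {helicone_key}",
        },
        method="POST",
    )

req = build_proxied_request("sk-...", "helicone-...")
print(req.full_url)  # → https://oai.helicone.ai/v1/chat/completions
```

Because nothing else about the request changes, switching back to a direct provider call is a one-line revert of the base URL, which is what makes this class of tool so cheap to trial.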