
Langfuse Tutorial

Langfuse is open-source observability & analytics for LLM apps: detailed production traces and a granular view of quality, cost, and latency.

Usage - log all LLM providers (OpenAI, Azure, Anthropic, Cohere, Replicate, PaLM)

LiteLLM provides callbacks, making it easy for you to log data depending on the status of your responses.
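For example, beyond the success hook used throughout this page, LiteLLM also exposes a failure hook; a minimal sketch wiring both to Langfuse (litellm.failure_callback is a standard LiteLLM setting, though this page only demonstrates success logging):

import litellm

# log successful responses to langfuse
litellm.success_callback = ["langfuse"]
# also log failed calls (exceptions) to langfuse
litellm.failure_callback = ["langfuse"]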

Pre-Requisites

pip install litellm langfuse

Using Callbacks


Get your Langfuse API Keys from https://cloud.langfuse.com/

Use just two lines of code to instantly log your responses across all providers with Langfuse:

import litellm
litellm.success_callback = ["langfuse"]

API keys for Langfuse

Set the following variables in your environment (e.g. via your .env file):

# Required
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["LANGFUSE_PUBLIC_KEY"] = ""

# Optional, defaults to https://cloud.langfuse.com
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"

Complete code

import litellm
from litellm import completion
import os

# from https://cloud.langfuse.com/
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""


# OpenAI and Cohere keys
# You can use any of the litellm supported providers: https://docs.litellm.ai/docs/providers
os.environ["OPENAI_API_KEY"] = ""
os.environ["COHERE_API_KEY"] = ""

# set langfuse as a callback, litellm will send the data to langfuse
litellm.success_callback = ["langfuse"]

# openai call
response = completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm openai"}
    ],
)

print(response)

# cohere call
response = completion(
    model="command-nightly",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm cohere"}
    ],
)

print(response)
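Streaming works through the same callback; the sketch below assumes your keys and litellm.success_callback are set as above, and that LiteLLM assembles the streamed chunks into a single logged generation (assumed behavior, not shown on this page):

# streaming openai call - logged via the same langfuse callback
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - stream this"}],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")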

Advanced

Set custom generation names, pass metadata

import litellm
from litellm import completion
import os

# from https://cloud.langfuse.com/
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""


# OpenAI key
# You can use any of the litellm supported providers: https://docs.litellm.ai/docs/providers
os.environ["OPENAI_API_KEY"] = ""

# set langfuse as a callback, litellm will send the data to langfuse
litellm.success_callback = ["langfuse"]

# openai call
response = completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm openai"}
    ],
    metadata={
        "generation_name": "litellm-ishaan-gen",  # set langfuse generation name
        # custom metadata fields
        "project": "litellm-proxy"
    },
)

print(response)
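The same metadata dict can carry other Langfuse-specific fields; the field names below (trace_user_id, session_id, tags) are assumptions drawn from LiteLLM's metadata mechanism rather than this page, so verify them against the LiteLLM docs:

# sketch: grouping generations under a user and session via metadata
# (field names are assumptions - check the litellm docs for the supported list)
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}],
    metadata={
        "generation_name": "litellm-ishaan-gen",
        "trace_user_id": "user-123",     # tie the trace to an end user
        "session_id": "session-abc",     # group traces into a session
        "tags": ["staging", "litellm"],  # filterable tags in langfuse
    },
)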

Example - FastAPI Server with LiteLLM + Langfuse

https://replit.com/@BerriAI/LiteLLMFastAPILangfuse