Tracing and Observability for LangChain with Agenta
LangChain is a framework for developing applications powered by large language models (LLMs). By instrumenting LangChain with Agenta, you can monitor and debug your applications more effectively, gaining insights into each step of your workflows.
This guide shows you how to instrument LangChain applications using Agenta's observability features.
Installation
Install the required packages:
```bash
pip install -U agenta openai opentelemetry-instrumentation-langchain langchain langchain-openai
```
Configure Environment Variables
- Agenta Cloud or Enterprise
- Agenta OSS Running Locally
```python
import os

os.environ["AGENTA_API_KEY"] = "YOUR_AGENTA_API_KEY"
os.environ["AGENTA_HOST"] = "https://cloud.agenta.ai"
```
```python
import os

os.environ["AGENTA_HOST"] = "http://localhost"
```
Code Example
This example uses LangChain Expression Language (LCEL) to build a multi-step workflow that generates a joke and then translates it.
```python
import agenta as ag
from opentelemetry.instrumentation.langchain import LangchainInstrumentor
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough, RunnableLambda
from langchain_openai import ChatOpenAI

ag.init()
LangchainInstrumentor().instrument()


def langchain_app():
    # Initialize the chat model
    llm = ChatOpenAI(temperature=0)

    # Create prompt for joke generation
    joke_prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a funny sarcastic nerd."),
        ("human", "Tell me a joke about {subject}."),
    ])

    # Create prompt for translation
    translate_prompt = ChatPromptTemplate.from_messages([
        ("system", "You are an Elf."),
        ("human", "Translate the joke below into Sindarin language:\n{joke}"),
    ])

    # Build the chains using LCEL (LangChain Expression Language)
    # First chain: generate a joke
    joke_chain = joke_prompt | llm | StrOutputParser()

    # Second chain: translate the joke
    translate_chain = translate_prompt | llm | StrOutputParser()

    # Combine the chains: generate a joke, then translate it
    full_chain = (
        {"subject": RunnablePassthrough()}
        | RunnableLambda(lambda x: {"joke": joke_chain.invoke(x["subject"])})
        | translate_chain
    )

    # Execute the workflow and print the result
    result = full_chain.invoke("OpenTelemetry")
    print(result)


# Run the LangChain application
langchain_app()
```
Explanation
- Initialize Agenta: `ag.init()` sets up the Agenta SDK.
- Instrument LangChain: `LangchainInstrumentor().instrument()` instruments LangChain for tracing. This must be called before running your application so that all components are traced.
- LCEL Chains: The pipe operator (`|`) chains components together. Each step's output becomes the next step's input, making it easy to compose complex workflows.
Using Workflows
You can optionally use the `@ag.instrument(spankind="WORKFLOW")` decorator to create a parent span for your workflow, grouping all the LangChain spans emitted during a single run. This is optional, but it is good practice to instrument the main function of your application.
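Applied to the example above, the decorator might look like the following sketch (the function body is elided; the setup mirrors the earlier example, and Agenta credentials are assumed to be configured):

```python
import agenta as ag
from opentelemetry.instrumentation.langchain import LangchainInstrumentor

ag.init()
LangchainInstrumentor().instrument()


@ag.instrument(spankind="WORKFLOW")
def langchain_app():
    # ... build and invoke your LCEL chains as shown above ...
    ...


langchain_app()
```

With the decorator in place, the LangChain spans produced inside the call appear nested under a single parent workflow span in Agenta's trace view, which makes multi-step runs easier to navigate.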