Tracing LangGraph🦜🕸️

LangGraph is an open-source library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows.

MLflow Tracing provides automatic tracing capability for LangGraph as an extension of its LangChain integration. When you enable automatic tracing for LangChain by calling the mlflow.langchain.autolog() function, MLflow automatically captures graph executions and logs them to the active MLflow Experiment.
import mlflow
mlflow.langchain.autolog()
The MLflow LangGraph integration is not only about tracing. MLflow offers a complete tracking experience for LangGraph, including model tracking, dependency management, and evaluation. Check out the MLflow LangChain Flavor to learn more!
Example Usage
Running the following code generates a trace for the graph execution.
from typing import Literal
import mlflow
from langchain_core.messages import AIMessage, ToolCall
from langchain_core.outputs import ChatGeneration, ChatResult
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
# Enabling tracing for LangGraph (LangChain)
mlflow.langchain.autolog()
# Optional: Set a tracking URI and an experiment
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("LangGraph")
@tool
def get_weather(city: Literal["nyc", "sf"]):
    """Use this to get weather information."""
    if city == "nyc":
        return "It might be cloudy in nyc"
    elif city == "sf":
        return "It's always sunny in sf"
llm = ChatOpenAI(model="gpt-4o-mini")
tools = [get_weather]
graph = create_react_agent(llm, tools)
# Invoke the graph
result = graph.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf?"}]}
)
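The invoke call returns the final graph state, whose messages list ends with the agent's reply. In real LangGraph results the entries are LangChain message objects exposing a .content attribute, but the extraction logic is the same as for the plain dicts sketched below (the result value here is a hypothetical stand-in, not actual output):

```python
# Hypothetical final state, standing in for the dict returned by
# graph.invoke(); real entries are LangChain message objects whose
# text lives in a `.content` attribute.
result = {
    "messages": [
        {"role": "user", "content": "what is the weather in sf?"},
        {"role": "assistant", "content": "It's always sunny in sf"},
    ]
}

# The agent's final answer is the last message in the list
final_answer = result["messages"][-1]["content"]
print(final_answer)  # It's always sunny in sf
```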
Token Usage Tracking
MLflow >= 3.1.0 supports token usage tracking for LangGraph. The token usage for each LLM call during a graph invocation is logged in the mlflow.chat.tokenUsage span attribute, and the total usage for the entire trace is logged in the mlflow.trace.tokenUsage metadata field.
import mlflow
mlflow.langchain.autolog()
# Execute the agent graph defined in the previous example
graph.invoke({"messages": [{"role": "user", "content": "what is the weather in sf?"}]})
# Get the trace object just created
last_trace_id = mlflow.get_last_active_trace_id()
trace = mlflow.get_trace(trace_id=last_trace_id)
# Print the token usage
total_usage = trace.info.token_usage
print("== Total token usage: ==")
print(f" Input tokens: {total_usage['input_tokens']}")
print(f" Output tokens: {total_usage['output_tokens']}")
print(f" Total tokens: {total_usage['total_tokens']}")
# Print the token usage for each LLM call
print("\n== Token usage for each LLM call: ==")
for span in trace.data.spans:
    if usage := span.get_attribute("mlflow.chat.tokenUsage"):
        print(f"{span.name}:")
        print(f"  Input tokens: {usage['input_tokens']}")
        print(f"  Output tokens: {usage['output_tokens']}")
        print(f"  Total tokens: {usage['total_tokens']}")
== Total token usage: ==
  Input tokens: 149
  Output tokens: 135
  Total tokens: 284

== Token usage for each LLM call: ==
ChatOpenAI_1:
  Input tokens: 58
  Output tokens: 87
  Total tokens: 145
ChatOpenAI_2:
  Input tokens: 91
  Output tokens: 48
  Total tokens: 139
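The per-call usage values sum to the trace-level total shown above. A minimal sketch of that relationship, using the numbers from the sample output (plain Python, no MLflow required):

```python
# Per-LLM-call usage dicts, mirroring the shape of the
# mlflow.chat.tokenUsage span attribute from the sample output
span_usages = [
    {"input_tokens": 58, "output_tokens": 87, "total_tokens": 145},  # ChatOpenAI_1
    {"input_tokens": 91, "output_tokens": 48, "total_tokens": 139},  # ChatOpenAI_2
]

# Summing per-span usage reproduces the trace-level total
total = {
    key: sum(usage[key] for usage in span_usages)
    for key in ("input_tokens", "output_tokens", "total_tokens")
}
print(total)  # {'input_tokens': 149, 'output_tokens': 135, 'total_tokens': 284}
```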
Adding Spans within Nodes or Tools
By combining automatic tracing with the manual tracing APIs, you can add child spans inside a node or tool to gain more detailed insight into that step.

Take LangGraph's Code Assistant tutorial as an example. The check_code node actually consists of two different validations of the generated code. You may want to add a span for each validation to see which one was executed. To do so, simply create manual spans inside the node function.
import mlflow
from mlflow.entities import SpanType


def code_check(state: GraphState):
    # State
    messages = state["messages"]
    code_solution = state["generation"]
    iterations = state["iterations"]

    # Get solution components
    imports = code_solution.imports
    code = code_solution.code

    # Check imports
    try:
        # Create a child span manually with mlflow.start_span() API
        with mlflow.start_span(name="import_check", span_type=SpanType.TOOL) as span:
            span.set_inputs(imports)
            exec(imports)
            span.set_outputs("ok")
    except Exception as e:
        error_message = [("user", f"Your solution failed the import test: {e}")]
        messages += error_message
        return {
            "generation": code_solution,
            "messages": messages,
            "iterations": iterations,
            "error": "yes",
        }

    # Check execution
    try:
        code = imports + "\n" + code
        with mlflow.start_span(name="execution_check", span_type=SpanType.TOOL) as span:
            span.set_inputs(code)
            exec(code)
            span.set_outputs("ok")
    except Exception as e:
        error_message = [("user", f"Your solution failed the code execution test: {e}")]
        messages += error_message
        return {
            "generation": code_solution,
            "messages": messages,
            "iterations": iterations,
            "error": "yes",
        }

    # No errors
    return {
        "generation": code_solution,
        "messages": messages,
        "iterations": iterations,
        "error": "no",
    }
With this, the span for the check_code node will contain child spans that record whether each validation passed or failed, along with their exception details.

Disabling Auto Tracing
Auto tracing for LangGraph can be disabled globally by calling mlflow.langchain.autolog(disable=True) or mlflow.autolog(disable=True).