
Tracing LangGraph🦜🕸️

LangChain Tracing via autolog

LangGraph is an open-source library for building stateful, multi-actor applications with large language models (LLMs), commonly used to create agents and multi-agent workflows.

MLflow Tracing provides automatic tracing for LangGraph as an extension of its LangChain integration. Once automatic tracing for LangChain is enabled by calling the mlflow.langchain.autolog() function, MLflow automatically captures graph execution traces and logs them to the active MLflow experiment.

import mlflow

mlflow.langchain.autolog()
Tip

The MLflow LangGraph integration is not limited to tracing. MLflow offers a complete tracking experience for LangGraph, including model tracking, dependency management, and evaluation. Check out the MLflow LangChain Flavor to learn more!

Example Usage

Running the following code generates a trace for the graph, as shown in the video clip above.

from typing import Literal

import mlflow

from langchain_core.messages import AIMessage, ToolCall
from langchain_core.outputs import ChatGeneration, ChatResult
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Enabling tracing for LangGraph (LangChain)
mlflow.langchain.autolog()

# Optional: Set a tracking URI and an experiment
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("LangGraph")


@tool
def get_weather(city: Literal["nyc", "sf"]):
    """Use this to get weather information."""
    if city == "nyc":
        return "It might be cloudy in nyc"
    elif city == "sf":
        return "It's always sunny in sf"


llm = ChatOpenAI(model="gpt-4o-mini")
tools = [get_weather]
graph = create_react_agent(llm, tools)

# Invoke the graph
result = graph.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf?"}]}
)

Token Usage Tracking

MLflow >= 3.1.0 supports token usage tracking for LangGraph. The token usage of each LLM call during a graph invocation is recorded in the mlflow.chat.tokenUsage span attribute, and the total usage across the entire trace is recorded in the mlflow.trace.tokenUsage metadata field.

import json
import mlflow

mlflow.langchain.autolog()

# Execute the agent graph defined in the previous example
graph.invoke({"messages": [{"role": "user", "content": "what is the weather in sf?"}]})

# Get the trace object just created
last_trace_id = mlflow.get_last_active_trace_id()
trace = mlflow.get_trace(trace_id=last_trace_id)

# Print the token usage
total_usage = trace.info.token_usage
print("== Total token usage: ==")
print(f" Input tokens: {total_usage['input_tokens']}")
print(f" Output tokens: {total_usage['output_tokens']}")
print(f" Total tokens: {total_usage['total_tokens']}")

# Print the token usage for each LLM call
print("\n== Token usage for each LLM call: ==")
for span in trace.data.spans:
    if usage := span.get_attribute("mlflow.chat.tokenUsage"):
        print(f"{span.name}:")
        print(f" Input tokens: {usage['input_tokens']}")
        print(f" Output tokens: {usage['output_tokens']}")
        print(f" Total tokens: {usage['total_tokens']}")
== Total token usage: ==
 Input tokens: 149
 Output tokens: 135
 Total tokens: 284

== Token usage for each LLM call: ==
ChatOpenAI_1:
 Input tokens: 58
 Output tokens: 87
 Total tokens: 145
ChatOpenAI_2:
 Input tokens: 91
 Output tokens: 48
 Total tokens: 139
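As a quick sanity check, the per-call numbers above should add up to the trace-level totals. A minimal sketch with the sample values (pure Python; no MLflow connection required):

```python
# Per-call usage copied from the sample output above
calls = {
    "ChatOpenAI_1": {"input_tokens": 58, "output_tokens": 87, "total_tokens": 145},
    "ChatOpenAI_2": {"input_tokens": 91, "output_tokens": 48, "total_tokens": 139},
}

# Sum each field across all LLM calls in the trace
totals = {
    key: sum(usage[key] for usage in calls.values())
    for key in ("input_tokens", "output_tokens", "total_tokens")
}
print(totals)
# {'input_tokens': 149, 'output_tokens': 135, 'total_tokens': 284}
```

The sums match the values in the mlflow.trace.tokenUsage metadata field, since the trace total is the aggregate over all LLM spans.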

Adding Spans within Nodes or Tools

By combining automatic tracing with the manual tracing APIs, you can add child spans inside a node or tool to get more detailed insight into each step.

Take LangGraph's Code Assistant tutorial as an example. The code_check node actually runs two different validations on the generated code. You may want to add a span for each validation to see which one was executed. To do so, simply create manual spans inside the node function.

import mlflow
from mlflow.entities import SpanType


def code_check(state: GraphState):
    # State
    messages = state["messages"]
    code_solution = state["generation"]
    iterations = state["iterations"]

    # Get solution components
    imports = code_solution.imports
    code = code_solution.code

    # Check imports
    try:
        # Create a child span manually with the mlflow.start_span() API
        with mlflow.start_span(name="import_check", span_type=SpanType.TOOL) as span:
            span.set_inputs(imports)
            exec(imports)
            span.set_outputs("ok")
    except Exception as e:
        error_message = [("user", f"Your solution failed the import test: {e}")]
        messages += error_message
        return {
            "generation": code_solution,
            "messages": messages,
            "iterations": iterations,
            "error": "yes",
        }

    # Check execution
    try:
        code = imports + "\n" + code
        with mlflow.start_span(name="execution_check", span_type=SpanType.TOOL) as span:
            span.set_inputs(code)
            exec(code)
            span.set_outputs("ok")
    except Exception as e:
        error_message = [("user", f"Your solution failed the code execution test: {e}")]
        messages += error_message
        return {
            "generation": code_solution,
            "messages": messages,
            "iterations": iterations,
            "error": "yes",
        }

    # No errors
    return {
        "generation": code_solution,
        "messages": messages,
        "iterations": iterations,
        "error": "no",
    }

This way, the span for the code_check node will contain child spans that record whether each validation failed, along with the exception details.

LangGraph Child Span

Disable Auto-Tracing

Auto-tracing for LangGraph can be disabled globally by calling mlflow.langchain.autolog(disable=True) or mlflow.autolog(disable=True).