
Agno Tracing

Agno Tracing via autolog

Agno is a flexible agent framework for orchestrating LLMs, reasoning steps, tools, and memory into a unified pipeline.

MLflow Tracing provides automatic tracing for Agno. Enable auto tracing for Agno by calling the mlflow.agno.autolog() function, and MLflow will capture traces of agent invocations and log them to the active MLflow experiment.

```python
import mlflow

mlflow.agno.autolog()
```

MLflow tracing automatically captures the following information about agent invocations:

  • Prompts and completion responses
  • Latencies
  • Metadata about the different agents, such as function names
  • Token usage and cost
  • Cache hits
  • Exceptions, if any are raised

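Because traces go to the active MLflow experiment, you will typically point MLflow at a tracking server and select an experiment before enabling autolog. A minimal configuration sketch (the URI and experiment name below are placeholders, not defaults):

```python
import mlflow

# Placeholder tracking server URI -- replace with your own
mlflow.set_tracking_uri("http://localhost:5000")

# Traces are logged to the active experiment (placeholder name)
mlflow.set_experiment("agno-tracing-demo")

# Enable auto tracing for Agno
mlflow.agno.autolog()
```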
Basic Example

Install the dependencies required for the example:

```bash
pip install 'mlflow>=3.3' agno anthropic yfinance
```

Run a simple agent with mlflow.agno.autolog() enabled:

```python
import mlflow

from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.tools.yfinance import YFinanceTools

# Enable auto tracing for Agno
mlflow.agno.autolog()

agent = Agent(
    model=Claude(id="claude-sonnet-4-20250514"),
    tools=[YFinanceTools(stock_price=True)],
    instructions="Use tables to display data. Don't include any other text.",
    markdown=True,
)
agent.print_response("What is the stock price of Apple?", stream=False)
```

Multi-Agent (Agent-to-Agent) Interaction

When using the non-streaming endpoints of the Agno API, MLflow now makes it easier to track how multiple AI agents work together. It automatically records every handoff between agents, the messages they exchange, and the details of any functions or tools they use, such as inputs, outputs, and time spent. This gives you a complete view of the flow, making it easier to troubleshoot, measure performance, and reproduce results.

Multi-Agent Example

```python
import mlflow

from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.models.openai import OpenAIChat
from agno.team.team import Team
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.reasoning import ReasoningTools
from agno.tools.yfinance import YFinanceTools

# Enable auto tracing for Agno
mlflow.agno.autolog()


web_agent = Agent(
    name="Web Search Agent",
    role="Handle web search requests and general research",
    model=OpenAIChat(id="gpt-4.1"),
    tools=[DuckDuckGoTools()],
    instructions="Always include sources",
    add_datetime_to_instructions=True,
)

finance_agent = Agent(
    name="Finance Agent",
    role="Handle financial data requests and market analysis",
    model=OpenAIChat(id="gpt-4.1"),
    tools=[
        YFinanceTools(
            stock_price=True,
            stock_fundamentals=True,
            analyst_recommendations=True,
            company_info=True,
        )
    ],
    instructions=[
        "Use tables to display stock prices, fundamentals (P/E, Market Cap), and recommendations.",
        "Clearly state the company name and ticker symbol.",
        "Focus on delivering actionable financial insights.",
    ],
    add_datetime_to_instructions=True,
)

reasoning_finance_team = Team(
    name="Reasoning Finance Team",
    mode="coordinate",
    model=Claude(id="claude-sonnet-4-20250514"),
    members=[web_agent, finance_agent],
    tools=[ReasoningTools(add_instructions=True)],
    instructions=[
        "Collaborate to provide comprehensive financial and investment insights",
        "Consider both fundamental analysis and market sentiment",
        "Use tables and charts to display data clearly and professionally",
        "Present findings in a structured, easy-to-follow format",
        "Only output the final consolidated analysis, not individual agent responses",
    ],
    markdown=True,
    show_members_responses=True,
    enable_agentic_context=True,
    add_datetime_to_instructions=True,
    success_criteria="The team has provided a complete financial analysis with data, visualizations, risk assessment, and actionable investment recommendations supported by quantitative analysis and market research.",
)

reasoning_finance_team.print_response(
    """Compare the tech sector giants (AAPL, GOOGL, MSFT) performance:
1. Get financial data for all three companies
2. Analyze recent news affecting the tech sector
3. Calculate comparative metrics and correlations
4. Recommend portfolio allocation weights""",
    show_full_reasoning=True,
)
```

Token Usage

MLflow >= 3.3.0 supports token usage tracking for Agno. The token usage for each agent invocation is logged in the mlflow.chat.tokenUsage attribute. The total token usage across the entire trace is available in the token_usage field of the trace info object.

```python
# Get the trace object just created
last_trace_id = mlflow.get_last_active_trace_id()
trace = mlflow.get_trace(trace_id=last_trace_id)

# Print the token usage
total_usage = trace.info.token_usage
print("== Total token usage: ==")
print(f"  Input tokens: {total_usage['input_tokens']}")
print(f"  Output tokens: {total_usage['output_tokens']}")
print(f"  Total tokens: {total_usage['total_tokens']}")

# Print the token usage for each LLM call
print("\n== Detailed usage for each LLM call: ==")
for span in trace.data.spans:
    if usage := span.get_attribute("mlflow.chat.tokenUsage"):
        print(f"{span.name}:")
        print(f"  Input tokens: {usage['input_tokens']}")
        print(f"  Output tokens: {usage['output_tokens']}")
        print(f"  Total tokens: {usage['total_tokens']}")
```
```bash
== Total token usage: ==
  Input tokens: 45710
  Output tokens: 3844
  Total tokens: 49554

== Detailed usage for each LLM call: ==
Team.run:
  Input tokens: 45710
  Output tokens: 3844
  Total tokens: 49554

... (other modules)
```
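The trace-level total shown above is simply the sum of the per-call usages. A minimal sketch of that roll-up over plain dicts (sum_token_usage is a hypothetical helper for illustration, not an MLflow API):

```python
def sum_token_usage(usages):
    """Aggregate per-span token usage dicts into a trace-level total."""
    totals = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
    for usage in usages:
        for key in totals:
            totals[key] += usage.get(key, 0)
    return totals


# Illustrative per-LLM-call usages, shaped like mlflow.chat.tokenUsage values
calls = [
    {"input_tokens": 100, "output_tokens": 20, "total_tokens": 120},
    {"input_tokens": 50, "output_tokens": 10, "total_tokens": 60},
]
print(sum_token_usage(calls))
```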

Disable Auto Tracing

Auto tracing for Agno can be disabled globally by calling mlflow.agno.autolog(disable=True) or mlflow.autolog(disable=True).