LLM RAG Evaluation with MLflow Example Notebook

In this notebook, we will demonstrate how to evaluate various RAG systems using MLflow.

We need to set our OpenAI API key.

In order to set your private key safely, please be sure to either export your key through a command-line terminal for your current instance, or, for a permanent addition to all user-based sessions, configure your favored environment management configuration file (i.e., .bashrc, .zshrc) to have the following entry:

```
OPENAI_API_KEY=<your openai API key>
```
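As a quick sanity check before running the cells below, you can confirm from Python that the key is visible to the process. This check is an illustrative addition, not part of the original notebook:

```python
import os

# Fail fast if the key was not exported into this session
assert "OPENAI_API_KEY" in os.environ, "OPENAI_API_KEY is not set"
```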
If using Azure OpenAI, you will instead need to set:

```
OPENAI_API_TYPE="azure"
OPENAI_API_VERSION=<YYYY-MM-DD>
OPENAI_API_KEY=<your Azure OpenAI API key>
OPENAI_API_BASE=<https://<>.<>.<>.com>
OPENAI_API_DEPLOYMENT_NAME=<deployment name>
```
Notebook compatibility

With rapidly changing libraries such as langchain, examples can become outdated rather quickly and will no longer work. For the purpose of demonstration, here are the critical dependencies that are recommended to use to effectively run this notebook:
| Package | Version |
|---|---|
| langchain | 0.1.16 |
| langchain-community | 0.0.33 |
| langchain-openai | 0.0.8 |
| openai | 1.12.0 |
| mlflow | 2.12.1 |
| chromadb | 0.4.24 |
If you attempt to execute this notebook with different versions, it may function correctly, but it is recommended to use the precise versions above to ensure that your code executes properly.
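For example, one way to pin these exact versions from within the notebook is a single `%pip` install (assuming a Jupyter-style environment where the `%pip` magic is available):

```python
%pip install langchain==0.1.16 langchain-community==0.0.33 langchain-openai==0.0.8 openai==1.12.0 mlflow==2.12.1 chromadb==0.4.24
```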
Create a RAG system

Use Langchain and Chroma to create a RAG system that answers questions based on the MLflow documentation.
```python
import pandas as pd
from langchain.chains import RetrievalQA
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain_openai import OpenAI, OpenAIEmbeddings

import mlflow

# Load the MLflow documentation page and split it into chunks for retrieval
loader = WebBaseLoader("https://mlflow.org.cn/docs/latest/index.html")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

# Embed the chunks and index them in a Chroma vector store
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)

# Build a retrieval QA chain that also returns the retrieved source documents
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
    return_source_documents=True,
)
```
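Before wiring the chain into an evaluation, a quick smoke test (an illustrative addition, not part of the original notebook) confirms that it returns both an answer and the retrieved source documents:

```python
# RetrievalQA returns a dict with "result" and, because
# return_source_documents=True above, a "source_documents" list.
response = qa("What is MLflow?")
print(response["result"])
print(f"Retrieved {len(response['source_documents'])} source documents")
```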
Evaluate the RAG system using mlflow.evaluate()

Create a simple function that runs each input through the RAG chain.
```python
def model(input_df):
    # Run each question through the RAG chain and collect the full outputs
    answer = []
    for index, row in input_df.iterrows():
        answer.append(qa(row["questions"]))

    return answer
```
Create an evaluation dataset.
```python
eval_df = pd.DataFrame(
    {
        "questions": [
            "What is MLflow?",
            "How to run mlflow.evaluate()?",
            "How to log_table()?",
            "How to load_table()?",
        ],
    }
)
```
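As an optional spot check (illustrative, and it does call the OpenAI API), the wrapper can be run directly on the evaluation frame before handing everything to mlflow.evaluate():

```python
# Each element is the full RetrievalQA output dict for one question
predictions = model(eval_df)
print(predictions[0]["result"])
```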
Create a faithfulness metric.
```python
from mlflow.metrics.genai import EvaluationExample, faithfulness

# Create a good and bad example for faithfulness in the context of this problem
faithfulness_examples = [
    EvaluationExample(
        input="How do I disable MLflow autologging?",
        output="mlflow.autolog(disable=True) will disable autologging for all functions. In Databricks, autologging is enabled by default. ",
        score=2,
        justification="The output provides a working solution, using the mlflow.autolog() function that is provided in the context.",
        grading_context={
            "context": "mlflow.autolog(log_input_examples: bool = False, log_model_signatures: bool = True, log_models: bool = True, log_datasets: bool = True, disable: bool = False, exclusive: bool = False, disable_for_unsupported_versions: bool = False, silent: bool = False, extra_tags: Optional[Dict[str, str]] = None) → None[source] Enables (or disables) and configures autologging for all supported integrations. The parameters are passed to any autologging integrations that support them. See the tracking docs for a list of supported autologging integrations. Note that framework-specific configurations set at any point will take precedence over any configurations set by this function."
        },
    ),
    EvaluationExample(
        input="How do I disable MLflow autologging?",
        output="mlflow.autolog(disable=True) will disable autologging for all functions.",
        score=5,
        justification="The output provides a solution that is using the mlflow.autolog() function that is provided in the context.",
        grading_context={
            "context": "mlflow.autolog(log_input_examples: bool = False, log_model_signatures: bool = True, log_models: bool = True, log_datasets: bool = True, disable: bool = False, exclusive: bool = False, disable_for_unsupported_versions: bool = False, silent: bool = False, extra_tags: Optional[Dict[str, str]] = None) → None[source] Enables (or disables) and configures autologging for all supported integrations. The parameters are passed to any autologging integrations that support them. See the tracking docs for a list of supported autologging integrations. Note that framework-specific configurations set at any point will take precedence over any configurations set by this function."
        },
    ),
]

faithfulness_metric = faithfulness(model="openai:/gpt-4", examples=faithfulness_examples)
print(faithfulness_metric)
```
```
EvaluationMetric(name=faithfulness, greater_is_better=True, long_name=faithfulness, version=v1, metric_details=
Task:
You must return the following fields in your response one below the other:
score: Your numerical score for the model's faithfulness based on the rubric
justification: Your step-by-step reasoning about the model's faithfulness score

You are an impartial judge. You will be given an input that was sent to a machine learning model, and you will be given an output that the model produced. You may also be given additional information that was used by the model to generate the output.

Your task is to determine a numerical score called faithfulness based on the input and output.
A definition of faithfulness and a grading rubric are provided below.
You must use the grading rubric to determine your score. You must also justify your score.

Examples could be included below for reference. Make sure to use them as references and to understand them before completing the task.

Input:
{input}

Output:
{output}

{grading_context_columns}

Metric definition:
Faithfulness is only evaluated with the provided output and provided context, please ignore the provided input entirely when scoring faithfulness. Faithfulness assesses how much of the provided output is factually consistent with the provided context. A higher score indicates that a higher proportion of claims present in the output can be derived from the provided context. Faithfulness does not consider how much extra information from the context is not present in the output.

Grading rubric:
Faithfulness: Below are the details for different scores:
- Score 1: None of the claims in the output can be inferred from the provided context.
- Score 2: Some of the claims in the output can be inferred from the provided context, but the majority of the output is missing from, inconsistent with, or contradictory to the provided context.
- Score 3: Half or more of the claims in the output can be inferred from the provided context.
- Score 4: Most of the claims in the output can be inferred from the provided context, with very little information that is not directly supported by the provided context.
- Score 5: All of the claims in the output are directly supported by the provided context, demonstrating high faithfulness to the provided context.

Examples:

Example Input:
How do I disable MLflow autologging?

Example Output:
mlflow.autolog(disable=True) will disable autologging for all functions. In Databricks, autologging is enabled by default.

Additional information used by the model:
key: context
value:
mlflow.autolog(log_input_examples: bool = False, log_model_signatures: bool = True, log_models: bool = True, log_datasets: bool = True, disable: bool = False, exclusive: bool = False, disable_for_unsupported_versions: bool = False, silent: bool = False, extra_tags: Optional[Dict[str, str]] = None) → None[source] Enables (or disables) and configures autologging for all supported integrations. The parameters are passed to any autologging integrations that support them. See the tracking docs for a list of supported autologging integrations. Note that framework-specific configurations set at any point will take precedence over any configurations set by this function.

Example score: 2
Example justification: The output provides a working solution, using the mlflow.autolog() function that is provided in the context.

Example Input:
How do I disable MLflow autologging?

Example Output:
mlflow.autolog(disable=True) will disable autologging for all functions.

Additional information used by the model:
key: context
value:
mlflow.autolog(log_input_examples: bool = False, log_model_signatures: bool = True, log_models: bool = True, log_datasets: bool = True, disable: bool = False, exclusive: bool = False, disable_for_unsupported_versions: bool = False, silent: bool = False, extra_tags: Optional[Dict[str, str]] = None) → None[source] Enables (or disables) and configures autologging for all supported integrations. The parameters are passed to any autologging integrations that support them. See the tracking docs for a list of supported autologging integrations. Note that framework-specific configurations set at any point will take precedence over any configurations set by this function.

Example score: 5
Example justification: The output provides a solution that is using the mlflow.autolog() function that is provided in the context.

You must return the following fields in your response one below the other:
score: Your numerical score for the model's faithfulness based on the rubric
justification: Your step-by-step reasoning about the model's faithfulness score
)
```
Create a relevance metric. You can see the full grading prompt by printing the metric or by accessing the metric_details attribute of the metric.
```python
from mlflow.metrics.genai import EvaluationExample, relevance

relevance_metric = relevance(model="openai:/gpt-4")
print(relevance_metric)
```
```
EvaluationMetric(name=relevance, greater_is_better=True, long_name=relevance, version=v1, metric_details=
Task:
You must return the following fields in your response one below the other:
score: Your numerical score for the model's relevance based on the rubric
justification: Your step-by-step reasoning about the model's relevance score

You are an impartial judge. You will be given an input that was sent to a machine learning model, and you will be given an output that the model produced. You may also be given additional information that was used by the model to generate the output.

Your task is to determine a numerical score called relevance based on the input and output.
A definition of relevance and a grading rubric are provided below.
You must use the grading rubric to determine your score. You must also justify your score.

Examples could be included below for reference. Make sure to use them as references and to understand them before completing the task.

Input:
{input}

Output:
{output}

{grading_context_columns}

Metric definition:
Relevance encompasses the appropriateness, significance, and applicability of the output with respect to both the input and context. Scores should reflect the extent to which the output directly addresses the question provided in the input, given the provided context.

Grading rubric:
Relevance: Below are the details for different scores:
- Score 1: The output doesn't mention anything about the question or is completely irrelevant to the provided context.
- Score 2: The output provides some relevance to the question and is somehow related to the provided context.
- Score 3: The output mostly answers the question and is largely consistent with the provided context.
- Score 4: The output answers the question and is consistent with the provided context.
- Score 5: The output answers the question comprehensively using the provided context.

Examples:

Example Input:
How is MLflow related to Databricks?

Example Output:
Databricks is a data engineering and analytics platform designed to help organizations process and analyze large amounts of data. Databricks is a company specializing in big data and machine learning solutions.

Additional information used by the model:
key: context
value:
MLflow is an open-source platform for managing the end-to-end machine learning (ML) lifecycle. It was developed by Databricks, a company that specializes in big data and machine learning solutions. MLflow is designed to address the challenges that data scientists and machine learning engineers face when developing, training, and deploying machine learning models.

Example score: 2
Example justification: The output provides relevant information about Databricks, mentioning it as a company specializing in big data and machine learning solutions. However, it doesn't directly address how MLflow is related to Databricks, which is the specific question asked in the input. Therefore, the output is only somewhat related to the provided context.

Example Input:
How is MLflow related to Databricks?

Example Output:
MLflow is a product created by Databricks to enhance the efficiency of machine learning processes.

Additional information used by the model:
key: context
value:
MLflow is an open-source platform for managing the end-to-end machine learning (ML) lifecycle. It was developed by Databricks, a company that specializes in big data and machine learning solutions. MLflow is designed to address the challenges that data scientists and machine learning engineers face when developing, training, and deploying machine learning models.

Example score: 4
Example justification: The output provides a relevant and accurate statement about the relationship between MLflow and Databricks. While it doesn't provide extensive detail, it still offers a substantial and meaningful response. To achieve a score of 5, the response could be further improved by providing additional context or details about how MLflow specifically functions within the Databricks ecosystem.

You must return the following fields in your response one below the other:
score: Your numerical score for the model's relevance based on the rubric
justification: Your step-by-step reasoning about the model's relevance score
)
```
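Equivalently, since the grading prompt is stored on the metric object, the metric_details attribute mentioned above can be read directly:

```python
# Print just the grading prompt rather than the full metric repr
print(relevance_metric.metric_details)
```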
```python
results = mlflow.evaluate(
    model,
    eval_df,
    model_type="question-answering",
    evaluators="default",
    predictions="result",
    extra_metrics=[faithfulness_metric, relevance_metric, mlflow.metrics.latency()],
    evaluator_config={
        "col_mapping": {
            "inputs": "questions",
            "context": "source_documents",
        }
    },
)
print(results.metrics)
```
```
2023/11/16 09:05:21 INFO mlflow.models.evaluation.base: Evaluating the model with the default evaluator.
2023/11/16 09:05:21 INFO mlflow.models.evaluation.default_evaluator: Computing model predictions.
2023/11/16 09:05:28 INFO mlflow.models.evaluation.default_evaluator: Testing metrics on first row...
Using default facebook/roberta-hate-speech-dynabench-r4-target checkpoint
  0%|          | 0/1 [00:00<?, ?it/s]
  0%|          | 0/1 [00:00<?, ?it/s]
2023/11/16 09:05:58 INFO mlflow.models.evaluation.default_evaluator: Evaluating builtin metrics: token_count
2023/11/16 09:05:58 INFO mlflow.models.evaluation.default_evaluator: Evaluating builtin metrics: toxicity
2023/11/16 09:05:58 INFO mlflow.models.evaluation.default_evaluator: Evaluating builtin metrics: flesch_kincaid_grade_level
2023/11/16 09:05:58 INFO mlflow.models.evaluation.default_evaluator: Evaluating builtin metrics: ari_grade_level
2023/11/16 09:05:58 INFO mlflow.models.evaluation.default_evaluator: Evaluating builtin metrics: exact_match
2023/11/16 09:05:58 INFO mlflow.models.evaluation.default_evaluator: Evaluating metrics: faithfulness
  0%|          | 0/4 [00:00<?, ?it/s]
2023/11/16 09:06:12 INFO mlflow.models.evaluation.default_evaluator: Evaluating metrics: relevance
  0%|          | 0/4 [00:00<?, ?it/s]
{'toxicity/v1/mean': 0.00022622970209340565, 'toxicity/v1/variance': 3.84291113351624e-09, 'toxicity/v1/p90': 0.0002859298692783341, 'toxicity/v1/ratio': 0.0, 'flesch_kincaid_grade_level/v1/mean': 8.1, 'flesch_kincaid_grade_level/v1/variance': 8.815, 'flesch_kincaid_grade_level/v1/p90': 11.48, 'ari_grade_level/v1/mean': 11.649999999999999, 'ari_grade_level/v1/variance': 19.527499999999993, 'ari_grade_level/v1/p90': 16.66, 'faithfulness/v1/mean': 4.0, 'faithfulness/v1/variance': 3.0, 'faithfulness/v1/p90': 5.0, 'relevance/v1/mean': 4.5, 'relevance/v1/variance': 0.25, 'relevance/v1/p90': 5.0}
```
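Because the aggregate scores in results.metrics are plain floats, they can feed simple quality gates; the thresholds below are illustrative assumptions, not values from this notebook:

```python
# Fail the run if mean judged quality drops below an arbitrary bar
assert results.metrics["faithfulness/v1/mean"] >= 3.0, "faithfulness regressed"
assert results.metrics["relevance/v1/mean"] >= 3.0, "relevance regressed"
```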
```python
results.tables["eval_results_table"]
```

```
Downloading artifacts:   0%|          | 0/1 [00:00<?, ?it/s]
```
| | questions | outputs | source_documents | latency | token_count | toxicity/v1/score | flesch_kincaid_grade_level/v1/score | ari_grade_level/v1/score | faithfulness/v1/score | faithfulness/v1/justification | relevance/v1/score | relevance/v1/justification |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | What is MLflow? | MLflow is an open-source platform, purpose-built for... | [{'lc_attributes': {}, 'lc_namespace': ['langc... | 1.989822 | 53 | 0.000137 | 12.5 | 18.4 | 5 | The output provided by the model is a direct extract... | 5 | The output provides a comprehensive answer to... |
| 1 | How to run mlflow.evaluate()? | The mlflow.evaluate() API allows you to validate... | [{'lc_attributes': {}, 'lc_namespace': ['langc... | 1.945368 | 55 | 0.000200 | 9.1 | 12.6 | 5 | The output provided by the model is completely... | 4 | The output provides a relevant and accurate explanation... |
| 2 | How to log_table()? | You can log a table with MLflow using the log... | [{'lc_attributes': {}, 'lc_namespace': ['langc... | 1.521511 | 32 | 0.000289 | 5.0 | 6.8 | 1 | The output claims that you can log a table with MLflow using the log... | 5 | The output provides a comprehensive answer to... |
| 3 | How to load_table()? | You cannot load_table() with MLflow. MLflow is... | [{'lc_attributes': {}, 'lc_namespace': ['langc... | 1.105279 | 27 | 0.000279 | 5.8 | 8.8 | 5 | The output claims that "You cannot load_table()... | 4 | The output provides a relevant and accurate response... |
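The per-row results table is a pandas DataFrame, so it can also be persisted for offline review; a minimal sketch (the file name is arbitrary):

```python
# Save the per-question scores and justifications to CSV
results.tables["eval_results_table"].to_csv("rag_eval_results.csv", index=False)
```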