Optimize Prompts for LangChain

This guide demonstrates how to use mlflow.genai.optimize_prompts() to automatically improve the prompts used by your LangChain chains. The mlflow.genai.optimize_prompts() API is framework-agnostic: it lets you apply state-of-the-art techniques to optimize a chain's prompts end to end with any framework. For more information about the API, see Optimize Prompts.

Prerequisites

```bash
pip install -U langchain langchain-openai mlflow gepa litellm
```

Set your OpenAI API key:

```bash
export OPENAI_API_KEY="your-api-key"
```

Set up the tracking server and the MLflow experiment:

```python
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("LangChain Optimization")
```
Basic Example

Here is a complete example of optimizing a translation chain. It shows how prompts in a LangChain workflow can be optimized with only minimal changes to your code.

```python
import mlflow
from mlflow.genai.scorers import Correctness
from mlflow.genai.optimize.optimizers import GepaPromptOptimizer
from langchain.agents import create_agent

# Step 1: Register your initial prompts
user_prompt = mlflow.genai.register_prompt(
    name="translation-prompt",
    template="Translate the following text from {{input_language}} to {{output_language}}: {{text}}",
)
system_prompt = mlflow.genai.register_prompt(
    name="system-prompt",
    template="You are a helpful assistant",
)


# Step 2: Create a prediction function
def predict_fn(input_language, output_language, text):
    # Load the prompts from the registry
    user_prompt = mlflow.genai.load_prompt("prompts:/translation-prompt@latest")
    system_prompt = mlflow.genai.load_prompt("prompts:/system-prompt@latest")

    agent = create_agent(
        model="gpt-4o-mini",
        system_prompt=system_prompt.template,
    )

    # Run the agent
    response = agent.invoke(
        {
            "messages": [
                {
                    "role": "user",
                    "content": user_prompt.format(
                        input_language=input_language,
                        output_language=output_language,
                        text=text,
                    ),
                }
            ]
        }
    )

    return response["messages"][-1].content


# Step 3: Prepare training data
dataset = [
    {
        "inputs": {
            "input_language": "English",
            "output_language": "French",
            "text": "Hello, how are you?",
        },
        "expectations": {"expected_response": "Bonjour, comment allez-vous?"},
    },
    {
        "inputs": {
            "input_language": "English",
            "output_language": "Spanish",
            "text": "Good morning",
        },
        "expectations": {"expected_response": "Buenos días"},
    },
    {
        "inputs": {
            "input_language": "English",
            "output_language": "German",
            "text": "Thank you very much",
        },
        "expectations": {"expected_response": "Vielen Dank"},
    },
    # more data...
]

# Step 4: Optimize the prompt
result = mlflow.genai.optimize_prompts(
    predict_fn=predict_fn,
    train_data=dataset,
    prompt_uris=[user_prompt.uri],
    optimizer=GepaPromptOptimizer(reflection_model="openai:/gpt-5"),
    scorers=[Correctness(model="openai:/gpt-5")],
)

# Step 5: Use the optimized prompt
optimized_user_prompt = result.optimized_prompts[0]
print(f"Optimized prompt URI: {optimized_user_prompt.uri}")
print(f"Optimized template: {optimized_user_prompt.template}")

# Since the chain loads prompts via @latest, it automatically
# picks up the optimized version
predict_fn(
    input_language="English",
    output_language="Japanese",
    text="Welcome to MLflow",
)
```